Dec 13 03:50:23.055993 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 03:50:23.056086 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 03:50:23.056113 kernel: BIOS-provided physical RAM map:
Dec 13 03:50:23.056127 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 03:50:23.056139 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 03:50:23.056152 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 03:50:23.056166 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 13 03:50:23.056179 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 13 03:50:23.056194 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 03:50:23.056206 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 03:50:23.056219 kernel: NX (Execute Disable) protection: active
Dec 13 03:50:23.056231 kernel: SMBIOS 2.8 present.
Dec 13 03:50:23.056243 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 13 03:50:23.056255 kernel: Hypervisor detected: KVM
Dec 13 03:50:23.056271 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 03:50:23.056286 kernel: kvm-clock: cpu 0, msr 2f19b001, primary cpu clock
Dec 13 03:50:23.056299 kernel: kvm-clock: using sched offset of 6462067844 cycles
Dec 13 03:50:23.056314 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 03:50:23.056328 kernel: tsc: Detected 1996.249 MHz processor
Dec 13 03:50:23.056342 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 03:50:23.056356 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 03:50:23.056386 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 13 03:50:23.056400 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 03:50:23.056416 kernel: ACPI: Early table checksum verification disabled
Dec 13 03:50:23.056430 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Dec 13 03:50:23.056443 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 03:50:23.056457 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 03:50:23.056471 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 03:50:23.056484 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 13 03:50:23.056499 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 03:50:23.056512 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 03:50:23.056526 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Dec 13 03:50:23.056543 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Dec 13 03:50:23.056556 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 13 03:50:23.056569 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Dec 13 03:50:23.056583 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Dec 13 03:50:23.056596 kernel: No NUMA configuration found
Dec 13 03:50:23.056609 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Dec 13 03:50:23.056623 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Dec 13 03:50:23.056637 kernel: Zone ranges:
Dec 13 03:50:23.056660 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 03:50:23.056674 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Dec 13 03:50:23.056688 kernel: Normal empty
Dec 13 03:50:23.056702 kernel: Movable zone start for each node
Dec 13 03:50:23.056716 kernel: Early memory node ranges
Dec 13 03:50:23.056730 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 03:50:23.056747 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 13 03:50:23.056761 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Dec 13 03:50:23.056775 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 03:50:23.056789 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 03:50:23.056803 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Dec 13 03:50:23.056817 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 03:50:23.056831 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 03:50:23.056845 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 03:50:23.056859 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 03:50:23.056875 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 03:50:23.056889 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 03:50:23.056904 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 03:50:23.056918 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 03:50:23.056932 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 03:50:23.056946 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 03:50:23.056960 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 13 03:50:23.056974 kernel: Booting paravirtualized kernel on KVM
Dec 13 03:50:23.056988 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 03:50:23.057003 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 03:50:23.057019 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 03:50:23.057077 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 03:50:23.057094 kernel: pcpu-alloc: [0] 0 1
Dec 13 03:50:23.057108 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Dec 13 03:50:23.057122 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 03:50:23.057136 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Dec 13 03:50:23.057150 kernel: Policy zone: DMA32
Dec 13 03:50:23.057167 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 03:50:23.057182 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 03:50:23.057192 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 03:50:23.057203 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 03:50:23.057213 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 03:50:23.057221 kernel: Memory: 1973284K/2096620K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 123076K reserved, 0K cma-reserved)
Dec 13 03:50:23.057230 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 03:50:23.057238 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 03:50:23.057246 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 03:50:23.057255 kernel: rcu: Hierarchical RCU implementation.
Dec 13 03:50:23.057264 kernel: rcu: RCU event tracing is enabled.
Dec 13 03:50:23.057272 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 03:50:23.057281 kernel: Rude variant of Tasks RCU enabled.
Dec 13 03:50:23.057289 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 03:50:23.057297 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 03:50:23.057305 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 03:50:23.057313 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 03:50:23.057321 kernel: Console: colour VGA+ 80x25
Dec 13 03:50:23.057330 kernel: printk: console [tty0] enabled
Dec 13 03:50:23.057339 kernel: printk: console [ttyS0] enabled
Dec 13 03:50:23.057347 kernel: ACPI: Core revision 20210730
Dec 13 03:50:23.057355 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 03:50:23.057363 kernel: x2apic enabled
Dec 13 03:50:23.057371 kernel: Switched APIC routing to physical x2apic.
Dec 13 03:50:23.057379 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 03:50:23.057387 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 03:50:23.057395 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Dec 13 03:50:23.057403 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 03:50:23.057413 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 03:50:23.057421 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 03:50:23.057429 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 03:50:23.057437 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 03:50:23.057445 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 03:50:23.057453 kernel: Speculative Store Bypass: Vulnerable
Dec 13 03:50:23.057461 kernel: x86/fpu: x87 FPU will use FXSAVE
Dec 13 03:50:23.057469 kernel: Freeing SMP alternatives memory: 32K
Dec 13 03:50:23.057477 kernel: pid_max: default: 32768 minimum: 301
Dec 13 03:50:23.057487 kernel: LSM: Security Framework initializing
Dec 13 03:50:23.057495 kernel: SELinux: Initializing.
Dec 13 03:50:23.057503 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 03:50:23.057511 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 03:50:23.057519 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Dec 13 03:50:23.057527 kernel: Performance Events: AMD PMU driver.
Dec 13 03:50:23.057535 kernel: ... version: 0
Dec 13 03:50:23.057543 kernel: ... bit width: 48
Dec 13 03:50:23.057551 kernel: ... generic registers: 4
Dec 13 03:50:23.057565 kernel: ... value mask: 0000ffffffffffff
Dec 13 03:50:23.057574 kernel: ... max period: 00007fffffffffff
Dec 13 03:50:23.057584 kernel: ... fixed-purpose events: 0
Dec 13 03:50:23.057592 kernel: ... event mask: 000000000000000f
Dec 13 03:50:23.057600 kernel: signal: max sigframe size: 1440
Dec 13 03:50:23.057608 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 03:50:23.057617 kernel: smp: Bringing up secondary CPUs ...
Dec 13 03:50:23.057625 kernel: x86: Booting SMP configuration:
Dec 13 03:50:23.057635 kernel: .... node #0, CPUs: #1
Dec 13 03:50:23.057643 kernel: kvm-clock: cpu 1, msr 2f19b041, secondary cpu clock
Dec 13 03:50:23.057652 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Dec 13 03:50:23.057660 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 03:50:23.057668 kernel: smpboot: Max logical packages: 2
Dec 13 03:50:23.057677 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Dec 13 03:50:23.057686 kernel: devtmpfs: initialized
Dec 13 03:50:23.057694 kernel: x86/mm: Memory block size: 128MB
Dec 13 03:50:23.057702 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 03:50:23.057712 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 03:50:23.057721 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 03:50:23.057729 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 03:50:23.057738 kernel: audit: initializing netlink subsys (disabled)
Dec 13 03:50:23.057746 kernel: audit: type=2000 audit(1734061822.684:1): state=initialized audit_enabled=0 res=1
Dec 13 03:50:23.057754 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 03:50:23.057763 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 03:50:23.057771 kernel: cpuidle: using governor menu
Dec 13 03:50:23.057779 kernel: ACPI: bus type PCI registered
Dec 13 03:50:23.057789 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 03:50:23.057798 kernel: dca service started, version 1.12.1
Dec 13 03:50:23.057806 kernel: PCI: Using configuration type 1 for base access
Dec 13 03:50:23.057815 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 03:50:23.057823 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 03:50:23.057832 kernel: ACPI: Added _OSI(Module Device)
Dec 13 03:50:23.057840 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 03:50:23.057848 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 03:50:23.057857 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 03:50:23.057867 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 03:50:23.057875 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 03:50:23.057884 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 03:50:23.057892 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 03:50:23.057901 kernel: ACPI: Interpreter enabled
Dec 13 03:50:23.057909 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 03:50:23.057917 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 03:50:23.057926 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 03:50:23.057934 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 03:50:23.057944 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 03:50:23.058101 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 03:50:23.058198 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 03:50:23.058211 kernel: acpiphp: Slot [3] registered
Dec 13 03:50:23.058220 kernel: acpiphp: Slot [4] registered
Dec 13 03:50:23.058228 kernel: acpiphp: Slot [5] registered
Dec 13 03:50:23.058237 kernel: acpiphp: Slot [6] registered
Dec 13 03:50:23.058249 kernel: acpiphp: Slot [7] registered
Dec 13 03:50:23.058257 kernel: acpiphp: Slot [8] registered
Dec 13 03:50:23.058265 kernel: acpiphp: Slot [9] registered
Dec 13 03:50:23.058274 kernel: acpiphp: Slot [10] registered
Dec 13 03:50:23.058282 kernel: acpiphp: Slot [11] registered
Dec 13 03:50:23.058291 kernel: acpiphp: Slot [12] registered
Dec 13 03:50:23.058299 kernel: acpiphp: Slot [13] registered
Dec 13 03:50:23.058307 kernel: acpiphp: Slot [14] registered
Dec 13 03:50:23.058315 kernel: acpiphp: Slot [15] registered
Dec 13 03:50:23.058324 kernel: acpiphp: Slot [16] registered
Dec 13 03:50:23.058334 kernel: acpiphp: Slot [17] registered
Dec 13 03:50:23.058343 kernel: acpiphp: Slot [18] registered
Dec 13 03:50:23.058351 kernel: acpiphp: Slot [19] registered
Dec 13 03:50:23.058359 kernel: acpiphp: Slot [20] registered
Dec 13 03:50:23.058368 kernel: acpiphp: Slot [21] registered
Dec 13 03:50:23.058376 kernel: acpiphp: Slot [22] registered
Dec 13 03:50:23.058384 kernel: acpiphp: Slot [23] registered
Dec 13 03:50:23.058393 kernel: acpiphp: Slot [24] registered
Dec 13 03:50:23.058401 kernel: acpiphp: Slot [25] registered
Dec 13 03:50:23.058411 kernel: acpiphp: Slot [26] registered
Dec 13 03:50:23.058419 kernel: acpiphp: Slot [27] registered
Dec 13 03:50:23.058427 kernel: acpiphp: Slot [28] registered
Dec 13 03:50:23.058435 kernel: acpiphp: Slot [29] registered
Dec 13 03:50:23.058444 kernel: acpiphp: Slot [30] registered
Dec 13 03:50:23.058452 kernel: acpiphp: Slot [31] registered
Dec 13 03:50:23.058460 kernel: PCI host bridge to bus 0000:00
Dec 13 03:50:23.058558 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 03:50:23.058647 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 03:50:23.058724 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 03:50:23.058796 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 03:50:23.058866 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 13 03:50:23.058938 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 03:50:23.062062 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 03:50:23.062181 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 03:50:23.062287 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 13 03:50:23.062378 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Dec 13 03:50:23.062467 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 13 03:50:23.062554 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 13 03:50:23.062641 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 13 03:50:23.062740 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 13 03:50:23.062835 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 03:50:23.062928 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 03:50:23.063016 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 03:50:23.063149 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 13 03:50:23.063239 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 13 03:50:23.063339 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 03:50:23.063427 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Dec 13 03:50:23.063519 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Dec 13 03:50:23.063606 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 03:50:23.063702 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 03:50:23.063789 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Dec 13 03:50:23.063871 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Dec 13 03:50:23.063951 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 03:50:23.066194 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Dec 13 03:50:23.066351 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 03:50:23.066447 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 03:50:23.066537 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Dec 13 03:50:23.066625 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 03:50:23.066716 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Dec 13 03:50:23.066798 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Dec 13 03:50:23.066880 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 03:50:23.066972 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 03:50:23.067122 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Dec 13 03:50:23.067206 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 03:50:23.067218 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 03:50:23.067226 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 03:50:23.067235 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 03:50:23.067243 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 03:50:23.067251 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 03:50:23.067262 kernel: iommu: Default domain type: Translated
Dec 13 03:50:23.067270 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 03:50:23.067349 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 03:50:23.067428 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 03:50:23.067509 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 03:50:23.067520 kernel: vgaarb: loaded
Dec 13 03:50:23.067529 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 03:50:23.067538 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 03:50:23.067547 kernel: PTP clock support registered
Dec 13 03:50:23.067559 kernel: PCI: Using ACPI for IRQ routing
Dec 13 03:50:23.067568 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 03:50:23.067576 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 03:50:23.067585 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 13 03:50:23.067593 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 03:50:23.067601 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 03:50:23.067610 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 03:50:23.067618 kernel: pnp: PnP ACPI init
Dec 13 03:50:23.067706 kernel: pnp 00:03: [dma 2]
Dec 13 03:50:23.067722 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 03:50:23.067731 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 03:50:23.067739 kernel: NET: Registered PF_INET protocol family
Dec 13 03:50:23.067748 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 03:50:23.067757 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 03:50:23.067766 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 03:50:23.067774 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 03:50:23.067783 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 03:50:23.067793 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 03:50:23.067802 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 03:50:23.067810 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 03:50:23.067819 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 03:50:23.067827 kernel: NET: Registered PF_XDP protocol family
Dec 13 03:50:23.067905 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 03:50:23.067981 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 03:50:23.068075 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 03:50:23.068152 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 03:50:23.068232 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 13 03:50:23.068317 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 03:50:23.068416 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 03:50:23.068502 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Dec 13 03:50:23.068514 kernel: PCI: CLS 0 bytes, default 64
Dec 13 03:50:23.068523 kernel: Initialise system trusted keyrings
Dec 13 03:50:23.068532 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 03:50:23.068543 kernel: Key type asymmetric registered
Dec 13 03:50:23.068552 kernel: Asymmetric key parser 'x509' registered
Dec 13 03:50:23.068560 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 03:50:23.068569 kernel: io scheduler mq-deadline registered
Dec 13 03:50:23.068577 kernel: io scheduler kyber registered
Dec 13 03:50:23.068586 kernel: io scheduler bfq registered
Dec 13 03:50:23.068594 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 03:50:23.068603 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 03:50:23.068612 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 03:50:23.068621 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 03:50:23.068631 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 03:50:23.068639 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 03:50:23.068648 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 03:50:23.068656 kernel: random: crng init done
Dec 13 03:50:23.068665 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 03:50:23.068674 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 03:50:23.068682 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 03:50:23.068782 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 03:50:23.068799 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 03:50:23.068878 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 03:50:23.068956 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T03:50:22 UTC (1734061822)
Dec 13 03:50:23.069056 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 13 03:50:23.069070 kernel: NET: Registered PF_INET6 protocol family
Dec 13 03:50:23.069078 kernel: Segment Routing with IPv6
Dec 13 03:50:23.069087 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 03:50:23.069095 kernel: NET: Registered PF_PACKET protocol family
Dec 13 03:50:23.069104 kernel: Key type dns_resolver registered
Dec 13 03:50:23.069117 kernel: IPI shorthand broadcast: enabled
Dec 13 03:50:23.069125 kernel: sched_clock: Marking stable (751507603, 119624959)->(943974284, -72841722)
Dec 13 03:50:23.069134 kernel: registered taskstats version 1
Dec 13 03:50:23.069142 kernel: Loading compiled-in X.509 certificates
Dec 13 03:50:23.069151 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 03:50:23.069159 kernel: Key type .fscrypt registered
Dec 13 03:50:23.069168 kernel: Key type fscrypt-provisioning registered
Dec 13 03:50:23.069176 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 03:50:23.069192 kernel: ima: Allocated hash algorithm: sha1
Dec 13 03:50:23.069201 kernel: ima: No architecture policies found
Dec 13 03:50:23.069209 kernel: clk: Disabling unused clocks
Dec 13 03:50:23.069218 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 03:50:23.069226 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 03:50:23.069235 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 03:50:23.069243 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 03:50:23.069252 kernel: Run /init as init process
Dec 13 03:50:23.069260 kernel: with arguments:
Dec 13 03:50:23.069270 kernel: /init
Dec 13 03:50:23.069279 kernel: with environment:
Dec 13 03:50:23.069287 kernel: HOME=/
Dec 13 03:50:23.069295 kernel: TERM=linux
Dec 13 03:50:23.069303 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 03:50:23.069315 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 03:50:23.069326 systemd[1]: Detected virtualization kvm.
Dec 13 03:50:23.069335 systemd[1]: Detected architecture x86-64.
Dec 13 03:50:23.069347 systemd[1]: Running in initrd.
Dec 13 03:50:23.069356 systemd[1]: No hostname configured, using default hostname.
Dec 13 03:50:23.069365 systemd[1]: Hostname set to .
Dec 13 03:50:23.069374 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 03:50:23.069384 systemd[1]: Queued start job for default target initrd.target.
Dec 13 03:50:23.069393 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 03:50:23.069402 systemd[1]: Reached target cryptsetup.target.
Dec 13 03:50:23.069411 systemd[1]: Reached target paths.target.
Dec 13 03:50:23.069422 systemd[1]: Reached target slices.target.
Dec 13 03:50:23.069431 systemd[1]: Reached target swap.target.
Dec 13 03:50:23.069440 systemd[1]: Reached target timers.target.
Dec 13 03:50:23.069450 systemd[1]: Listening on iscsid.socket.
Dec 13 03:50:23.069459 systemd[1]: Listening on iscsiuio.socket.
Dec 13 03:50:23.069468 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 03:50:23.069477 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 03:50:23.069488 systemd[1]: Listening on systemd-journald.socket.
Dec 13 03:50:23.069497 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 03:50:23.069506 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 03:50:23.069515 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 03:50:23.069524 systemd[1]: Reached target sockets.target.
Dec 13 03:50:23.069545 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 03:50:23.069557 systemd[1]: Finished network-cleanup.service.
Dec 13 03:50:23.069569 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 03:50:23.069579 systemd[1]: Starting systemd-journald.service...
Dec 13 03:50:23.069588 systemd[1]: Starting systemd-modules-load.service...
Dec 13 03:50:23.069598 systemd[1]: Starting systemd-resolved.service...
Dec 13 03:50:23.069607 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 03:50:23.069617 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 03:50:23.069627 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 03:50:23.069640 systemd-journald[185]: Journal started
Dec 13 03:50:23.069694 systemd-journald[185]: Runtime Journal (/run/log/journal/9be88b6daeca4c6d9ec0e91a4bfefd38) is 4.9M, max 39.5M, 34.5M free.
Dec 13 03:50:23.032356 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 03:50:23.094338 systemd[1]: Started systemd-journald.service.
Dec 13 03:50:23.094374 kernel: audit: type=1130 audit(1734061823.089:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:50:23.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:50:23.081632 systemd-resolved[187]: Positive Trust Anchors:
Dec 13 03:50:23.098509 kernel: audit: type=1130 audit(1734061823.094:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:50:23.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:50:23.081643 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 03:50:23.105409 kernel: audit: type=1130 audit(1734061823.099:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:50:23.105425 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 03:50:23.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:50:23.081682 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 03:50:23.111435 kernel: audit: type=1130 audit(1734061823.105:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:50:23.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:50:23.084385 systemd-resolved[187]: Defaulting to hostname 'linux'.
Dec 13 03:50:23.094848 systemd[1]: Started systemd-resolved.service.
Dec 13 03:50:23.099301 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 03:50:23.122125 kernel: Bridge firewalling registered
Dec 13 03:50:23.106139 systemd[1]: Reached target nss-lookup.target.
Dec 13 03:50:23.112790 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 03:50:23.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:50:23.114099 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 03:50:23.128517 kernel: audit: type=1130 audit(1734061823.124:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:50:23.115418 systemd-modules-load[186]: Inserted module 'br_netfilter'
Dec 13 03:50:23.123755 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 03:50:23.137611 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 03:50:23.138872 systemd[1]: Starting dracut-cmdline.service...
Dec 13 03:50:23.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:50:23.144067 kernel: audit: type=1130 audit(1734061823.138:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:50:23.150329 dracut-cmdline[204]: dracut-dracut-053
Dec 13 03:50:23.152761 dracut-cmdline[204]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 03:50:23.155504 kernel: SCSI subsystem initialized
Dec 13 03:50:23.170062 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 03:50:23.173482 kernel: device-mapper: uevent: version 1.0.3
Dec 13 03:50:23.173519 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 03:50:23.178561 systemd-modules-load[186]: Inserted module 'dm_multipath'
Dec 13 03:50:23.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:50:23.179460 systemd[1]: Finished systemd-modules-load.service.
Dec 13 03:50:23.185160 kernel: audit: type=1130 audit(1734061823.179:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:50:23.180766 systemd[1]: Starting systemd-sysctl.service...
Dec 13 03:50:23.192066 systemd[1]: Finished systemd-sysctl.service.
Dec 13 03:50:23.197193 kernel: audit: type=1130 audit(1734061823.192:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:50:23.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:50:23.230116 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 03:50:23.251104 kernel: iscsi: registered transport (tcp)
Dec 13 03:50:23.277811 kernel: iscsi: registered transport (qla4xxx)
Dec 13 03:50:23.277870 kernel: QLogic iSCSI HBA Driver
Dec 13 03:50:23.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:50:23.330705 systemd[1]: Finished dracut-cmdline.service.
Dec 13 03:50:23.337255 kernel: audit: type=1130 audit(1734061823.331:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:50:23.333690 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 03:50:23.421204 kernel: raid6: sse2x4 gen() 12303 MB/s Dec 13 03:50:23.438133 kernel: raid6: sse2x4 xor() 7047 MB/s Dec 13 03:50:23.455129 kernel: raid6: sse2x2 gen() 14298 MB/s Dec 13 03:50:23.472131 kernel: raid6: sse2x2 xor() 8802 MB/s Dec 13 03:50:23.489128 kernel: raid6: sse2x1 gen() 11332 MB/s Dec 13 03:50:23.506872 kernel: raid6: sse2x1 xor() 7018 MB/s Dec 13 03:50:23.506929 kernel: raid6: using algorithm sse2x2 gen() 14298 MB/s Dec 13 03:50:23.506957 kernel: raid6: .... xor() 8802 MB/s, rmw enabled Dec 13 03:50:23.507732 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 03:50:23.523129 kernel: xor: measuring software checksum speed Dec 13 03:50:23.523187 kernel: prefetch64-sse : 18347 MB/sec Dec 13 03:50:23.525172 kernel: generic_sse : 15614 MB/sec Dec 13 03:50:23.525229 kernel: xor: using function: prefetch64-sse (18347 MB/sec) Dec 13 03:50:23.643096 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 03:50:23.659104 systemd[1]: Finished dracut-pre-udev.service. Dec 13 03:50:23.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:23.659000 audit: BPF prog-id=7 op=LOAD Dec 13 03:50:23.660000 audit: BPF prog-id=8 op=LOAD Dec 13 03:50:23.660595 systemd[1]: Starting systemd-udevd.service... Dec 13 03:50:23.674320 systemd-udevd[386]: Using default interface naming scheme 'v252'. Dec 13 03:50:23.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:23.679116 systemd[1]: Started systemd-udevd.service. Dec 13 03:50:23.684619 systemd[1]: Starting dracut-pre-trigger.service... 
Dec 13 03:50:23.709638 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Dec 13 03:50:23.756848 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 03:50:23.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:23.758288 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 03:50:23.815008 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 03:50:23.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:23.892336 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Dec 13 03:50:23.914985 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 03:50:23.915011 kernel: GPT:17805311 != 41943039 Dec 13 03:50:23.915024 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 03:50:23.915052 kernel: GPT:17805311 != 41943039 Dec 13 03:50:23.915064 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 03:50:23.915075 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 03:50:23.934069 kernel: libata version 3.00 loaded. Dec 13 03:50:23.954060 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (444) Dec 13 03:50:23.954614 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
Dec 13 03:50:23.989555 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 13 03:50:23.989707 kernel: scsi host0: ata_piix Dec 13 03:50:23.989816 kernel: scsi host1: ata_piix Dec 13 03:50:23.989919 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Dec 13 03:50:23.989932 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Dec 13 03:50:23.997175 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 03:50:24.001164 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 03:50:24.002456 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 03:50:24.008475 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 03:50:24.010394 systemd[1]: Starting disk-uuid.service... Dec 13 03:50:24.023324 disk-uuid[462]: Primary Header is updated. Dec 13 03:50:24.023324 disk-uuid[462]: Secondary Entries is updated. Dec 13 03:50:24.023324 disk-uuid[462]: Secondary Header is updated. Dec 13 03:50:24.031070 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 03:50:24.043198 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 03:50:25.055085 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 03:50:25.055329 disk-uuid[463]: The operation has completed successfully. Dec 13 03:50:25.117712 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 03:50:25.119600 systemd[1]: Finished disk-uuid.service. Dec 13 03:50:25.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:25.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:25.138994 systemd[1]: Starting verity-setup.service... 
Dec 13 03:50:25.164221 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Dec 13 03:50:25.267571 systemd[1]: Found device dev-mapper-usr.device. Dec 13 03:50:25.271984 systemd[1]: Mounting sysusr-usr.mount... Dec 13 03:50:25.278856 systemd[1]: Finished verity-setup.service. Dec 13 03:50:25.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:25.418054 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 03:50:25.419159 systemd[1]: Mounted sysusr-usr.mount. Dec 13 03:50:25.420511 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 03:50:25.422269 systemd[1]: Starting ignition-setup.service... Dec 13 03:50:25.424932 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 03:50:25.439092 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 03:50:25.439193 kernel: BTRFS info (device vda6): using free space tree Dec 13 03:50:25.439222 kernel: BTRFS info (device vda6): has skinny extents Dec 13 03:50:25.459221 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 03:50:25.473917 systemd[1]: Finished ignition-setup.service. Dec 13 03:50:25.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:25.477385 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 03:50:25.563806 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 03:50:25.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:50:25.570000 audit: BPF prog-id=9 op=LOAD Dec 13 03:50:25.571749 systemd[1]: Starting systemd-networkd.service... Dec 13 03:50:25.611231 systemd-networkd[634]: lo: Link UP Dec 13 03:50:25.611244 systemd-networkd[634]: lo: Gained carrier Dec 13 03:50:25.611738 systemd-networkd[634]: Enumeration completed Dec 13 03:50:25.611961 systemd-networkd[634]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 03:50:25.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:25.613959 systemd-networkd[634]: eth0: Link UP Dec 13 03:50:25.613969 systemd-networkd[634]: eth0: Gained carrier Dec 13 03:50:25.614026 systemd[1]: Started systemd-networkd.service. Dec 13 03:50:25.614573 systemd[1]: Reached target network.target. Dec 13 03:50:25.616552 systemd[1]: Starting iscsiuio.service... Dec 13 03:50:25.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:25.627435 systemd[1]: Started iscsiuio.service. Dec 13 03:50:25.628578 systemd[1]: Starting iscsid.service... Dec 13 03:50:25.632341 iscsid[643]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 03:50:25.632341 iscsid[643]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 13 03:50:25.632341 iscsid[643]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
Dec 13 03:50:25.632341 iscsid[643]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 03:50:25.632341 iscsid[643]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 03:50:25.637547 iscsid[643]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 03:50:25.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:25.634205 systemd-networkd[634]: eth0: DHCPv4 address 172.24.4.174/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 03:50:25.634929 systemd[1]: Started iscsid.service. Dec 13 03:50:25.637986 systemd[1]: Starting dracut-initqueue.service... Dec 13 03:50:25.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:25.652348 systemd[1]: Finished dracut-initqueue.service. Dec 13 03:50:25.652970 systemd[1]: Reached target remote-fs-pre.target. Dec 13 03:50:25.653410 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 03:50:25.653842 systemd[1]: Reached target remote-fs.target. Dec 13 03:50:25.657515 systemd[1]: Starting dracut-pre-mount.service... Dec 13 03:50:25.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:25.677248 systemd[1]: Finished dracut-pre-mount.service. 
Dec 13 03:50:25.765645 ignition[553]: Ignition 2.14.0 Dec 13 03:50:25.766705 ignition[553]: Stage: fetch-offline Dec 13 03:50:25.766859 ignition[553]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:50:25.766903 ignition[553]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 03:50:25.769501 ignition[553]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 03:50:25.769810 ignition[553]: parsed url from cmdline: "" Dec 13 03:50:25.772163 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 03:50:25.769821 ignition[553]: no config URL provided Dec 13 03:50:25.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:25.775002 systemd[1]: Starting ignition-fetch.service... 
Dec 13 03:50:25.769834 ignition[553]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 03:50:25.769854 ignition[553]: no config at "/usr/lib/ignition/user.ign" Dec 13 03:50:25.769866 ignition[553]: failed to fetch config: resource requires networking Dec 13 03:50:25.770140 ignition[553]: Ignition finished successfully Dec 13 03:50:25.793421 ignition[657]: Ignition 2.14.0 Dec 13 03:50:25.793448 ignition[657]: Stage: fetch Dec 13 03:50:25.793680 ignition[657]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:50:25.793723 ignition[657]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 03:50:25.795985 ignition[657]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 03:50:25.796241 ignition[657]: parsed url from cmdline: "" Dec 13 03:50:25.796251 ignition[657]: no config URL provided Dec 13 03:50:25.796265 ignition[657]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 03:50:25.796284 ignition[657]: no config at "/usr/lib/ignition/user.ign" Dec 13 03:50:25.802253 ignition[657]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 03:50:25.802309 ignition[657]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 03:50:25.802686 ignition[657]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Dec 13 03:50:26.249776 ignition[657]: GET result: OK Dec 13 03:50:26.249995 ignition[657]: parsing config with SHA512: aa6e0c690d5e22415adcaf75c8650eb3206f3ac8dd8701d5b9d64f32da16c992ad44831fb2eb291ee8db25b57f1bf150d4e655e882fcb594333da10576454638 Dec 13 03:50:26.269504 unknown[657]: fetched base config from "system" Dec 13 03:50:26.269536 unknown[657]: fetched base config from "system" Dec 13 03:50:26.270783 ignition[657]: fetch: fetch complete Dec 13 03:50:26.269551 unknown[657]: fetched user config from "openstack" Dec 13 03:50:26.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:26.270796 ignition[657]: fetch: fetch passed Dec 13 03:50:26.273851 systemd[1]: Finished ignition-fetch.service. Dec 13 03:50:26.270875 ignition[657]: Ignition finished successfully Dec 13 03:50:26.277295 systemd[1]: Starting ignition-kargs.service... Dec 13 03:50:26.298263 ignition[663]: Ignition 2.14.0 Dec 13 03:50:26.298290 ignition[663]: Stage: kargs Dec 13 03:50:26.298526 ignition[663]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:50:26.310463 systemd[1]: Finished ignition-kargs.service. Dec 13 03:50:26.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:26.298568 ignition[663]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 03:50:26.313615 systemd[1]: Starting ignition-disks.service... 
Dec 13 03:50:26.300812 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 03:50:26.303577 ignition[663]: kargs: kargs passed Dec 13 03:50:26.303682 ignition[663]: Ignition finished successfully Dec 13 03:50:26.330015 ignition[669]: Ignition 2.14.0 Dec 13 03:50:26.330087 ignition[669]: Stage: disks Dec 13 03:50:26.330326 ignition[669]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:50:26.330366 ignition[669]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 03:50:26.332588 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 03:50:26.335464 ignition[669]: disks: disks passed Dec 13 03:50:26.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:26.337375 systemd[1]: Finished ignition-disks.service. Dec 13 03:50:26.335602 ignition[669]: Ignition finished successfully Dec 13 03:50:26.338825 systemd[1]: Reached target initrd-root-device.target. Dec 13 03:50:26.340782 systemd[1]: Reached target local-fs-pre.target. Dec 13 03:50:26.342962 systemd[1]: Reached target local-fs.target. Dec 13 03:50:26.345117 systemd[1]: Reached target sysinit.target. Dec 13 03:50:26.347296 systemd[1]: Reached target basic.target. Dec 13 03:50:26.351004 systemd[1]: Starting systemd-fsck-root.service... Dec 13 03:50:26.384293 systemd-fsck[677]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 03:50:26.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:26.397098 systemd[1]: Finished systemd-fsck-root.service. 
Dec 13 03:50:26.399625 systemd[1]: Mounting sysroot.mount... Dec 13 03:50:26.417158 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 03:50:26.419464 systemd[1]: Mounted sysroot.mount. Dec 13 03:50:26.421634 systemd[1]: Reached target initrd-root-fs.target. Dec 13 03:50:26.423714 systemd[1]: Mounting sysroot-usr.mount... Dec 13 03:50:26.424584 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 03:50:26.425236 systemd[1]: Starting flatcar-openstack-hostname.service... Dec 13 03:50:26.430279 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 03:50:26.431145 systemd[1]: Reached target ignition-diskful.target. Dec 13 03:50:26.432972 systemd[1]: Mounted sysroot-usr.mount. Dec 13 03:50:26.438389 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 03:50:26.440330 systemd[1]: Starting initrd-setup-root.service... Dec 13 03:50:26.451526 initrd-setup-root[689]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 03:50:26.458358 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (684) Dec 13 03:50:26.462272 initrd-setup-root[697]: cut: /sysroot/etc/group: No such file or directory Dec 13 03:50:26.474123 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 03:50:26.474161 kernel: BTRFS info (device vda6): using free space tree Dec 13 03:50:26.474183 kernel: BTRFS info (device vda6): has skinny extents Dec 13 03:50:26.476293 initrd-setup-root[721]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 03:50:26.486563 initrd-setup-root[731]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 03:50:26.490245 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 03:50:26.582096 systemd[1]: Finished initrd-setup-root.service. 
Dec 13 03:50:26.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:26.585396 systemd[1]: Starting ignition-mount.service... Dec 13 03:50:26.598346 systemd[1]: Starting sysroot-boot.service... Dec 13 03:50:26.607299 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 03:50:26.607554 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 03:50:26.636915 ignition[752]: INFO : Ignition 2.14.0 Dec 13 03:50:26.636915 ignition[752]: INFO : Stage: mount Dec 13 03:50:26.638208 ignition[752]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:50:26.638208 ignition[752]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 03:50:26.638208 ignition[752]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 03:50:26.640559 ignition[752]: INFO : mount: mount passed Dec 13 03:50:26.640559 ignition[752]: INFO : Ignition finished successfully Dec 13 03:50:26.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:26.639971 systemd[1]: Finished ignition-mount.service. Dec 13 03:50:26.651080 systemd[1]: Finished sysroot-boot.service. Dec 13 03:50:26.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:50:26.657591 coreos-metadata[683]: Dec 13 03:50:26.657 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 03:50:26.678190 coreos-metadata[683]: Dec 13 03:50:26.678 INFO Fetch successful Dec 13 03:50:26.678917 coreos-metadata[683]: Dec 13 03:50:26.678 INFO wrote hostname ci-3510-3-6-5-5611054123.novalocal to /sysroot/etc/hostname Dec 13 03:50:26.682620 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 03:50:26.682729 systemd[1]: Finished flatcar-openstack-hostname.service. Dec 13 03:50:26.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:26.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:26.684992 systemd[1]: Starting ignition-files.service... Dec 13 03:50:26.692862 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 03:50:26.703078 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (760) Dec 13 03:50:26.706702 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 03:50:26.706749 kernel: BTRFS info (device vda6): using free space tree Dec 13 03:50:26.706770 kernel: BTRFS info (device vda6): has skinny extents Dec 13 03:50:26.719986 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 03:50:26.746929 ignition[779]: INFO : Ignition 2.14.0 Dec 13 03:50:26.746929 ignition[779]: INFO : Stage: files Dec 13 03:50:26.750529 ignition[779]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:50:26.750529 ignition[779]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 03:50:26.750529 ignition[779]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 03:50:26.758548 ignition[779]: DEBUG : files: compiled without relabeling support, skipping Dec 13 03:50:26.760739 ignition[779]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 03:50:26.760739 ignition[779]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 03:50:26.766554 ignition[779]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 03:50:26.768792 ignition[779]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 03:50:26.770485 ignition[779]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 03:50:26.770413 unknown[779]: wrote ssh authorized keys file for user: core Dec 13 03:50:26.773716 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 03:50:26.773716 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 03:50:26.843673 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 03:50:27.158646 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 03:50:27.160061 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 03:50:27.160061 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 03:50:27.460597 systemd-networkd[634]: eth0: Gained IPv6LL Dec 13 03:50:27.817356 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 03:50:28.330221 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 03:50:28.331192 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 03:50:28.332376 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 03:50:28.332376 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 03:50:28.332376 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 03:50:28.332376 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 03:50:28.332376 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 03:50:28.332376 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 03:50:28.332376 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 03:50:28.338903 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 03:50:28.338903 
ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 03:50:28.338903 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 03:50:28.338903 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 03:50:28.338903 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 03:50:28.338903 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 03:50:28.907393 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 03:50:30.734740 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 03:50:30.737331 ignition[779]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 03:50:30.738694 ignition[779]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 03:50:30.740245 ignition[779]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Dec 13 03:50:30.741730 ignition[779]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 03:50:30.743547 ignition[779]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 03:50:30.743547 ignition[779]: INFO : 
files: op(d): [finished] processing unit "prepare-helm.service" Dec 13 03:50:30.743547 ignition[779]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Dec 13 03:50:30.747943 ignition[779]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 03:50:30.747943 ignition[779]: INFO : files: op(10): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 03:50:30.747943 ignition[779]: INFO : files: op(10): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 03:50:30.760177 ignition[779]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 03:50:30.760177 ignition[779]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 03:50:30.760177 ignition[779]: INFO : files: files passed Dec 13 03:50:30.760177 ignition[779]: INFO : Ignition finished successfully Dec 13 03:50:30.776576 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 13 03:50:30.776618 kernel: audit: type=1130 audit(1734061830.765:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.761809 systemd[1]: Finished ignition-files.service. Dec 13 03:50:30.768722 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 03:50:30.777959 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 03:50:30.782323 systemd[1]: Starting ignition-quench.service... 
Dec 13 03:50:30.789426 initrd-setup-root-after-ignition[804]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 03:50:30.789518 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 03:50:30.805178 kernel: audit: type=1130 audit(1734061830.792:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.805222 kernel: audit: type=1131 audit(1734061830.792:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.789739 systemd[1]: Finished ignition-quench.service. Dec 13 03:50:30.810206 kernel: audit: type=1130 audit(1734061830.806:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.792801 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 03:50:30.806377 systemd[1]: Reached target ignition-complete.target. Dec 13 03:50:30.812448 systemd[1]: Starting initrd-parse-etc.service... 
Dec 13 03:50:30.831176 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 03:50:30.832730 systemd[1]: Finished initrd-parse-etc.service. Dec 13 03:50:30.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.835106 systemd[1]: Reached target initrd-fs.target. Dec 13 03:50:30.851291 kernel: audit: type=1130 audit(1734061830.834:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.851351 kernel: audit: type=1131 audit(1734061830.834:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.852589 systemd[1]: Reached target initrd.target. Dec 13 03:50:30.854981 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 03:50:30.858560 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 03:50:30.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.883309 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 03:50:30.894804 kernel: audit: type=1130 audit(1734061830.884:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.885940 systemd[1]: Starting initrd-cleanup.service... 
Dec 13 03:50:30.915555 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 03:50:30.915759 systemd[1]: Finished initrd-cleanup.service. Dec 13 03:50:30.934925 kernel: audit: type=1130 audit(1734061830.917:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.934972 kernel: audit: type=1131 audit(1734061830.917:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.919114 systemd[1]: Stopped target nss-lookup.target. Dec 13 03:50:30.935685 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 03:50:30.937619 systemd[1]: Stopped target timers.target. Dec 13 03:50:30.939539 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 03:50:30.950766 kernel: audit: type=1131 audit(1734061830.941:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.939615 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 03:50:30.941507 systemd[1]: Stopped target initrd.target. 
Dec 13 03:50:30.951530 systemd[1]: Stopped target basic.target. Dec 13 03:50:30.953385 systemd[1]: Stopped target ignition-complete.target. Dec 13 03:50:30.955255 systemd[1]: Stopped target ignition-diskful.target. Dec 13 03:50:30.957333 systemd[1]: Stopped target initrd-root-device.target. Dec 13 03:50:30.959254 systemd[1]: Stopped target remote-fs.target. Dec 13 03:50:30.961134 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 03:50:30.963085 systemd[1]: Stopped target sysinit.target. Dec 13 03:50:30.964886 systemd[1]: Stopped target local-fs.target. Dec 13 03:50:30.966759 systemd[1]: Stopped target local-fs-pre.target. Dec 13 03:50:30.968593 systemd[1]: Stopped target swap.target. Dec 13 03:50:30.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.970431 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 03:50:30.970504 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 03:50:30.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.972381 systemd[1]: Stopped target cryptsetup.target. Dec 13 03:50:30.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.974152 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 03:50:30.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.974221 systemd[1]: Stopped dracut-initqueue.service. 
Dec 13 03:50:30.976250 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 03:50:30.976315 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 03:50:30.978125 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 03:50:30.978189 systemd[1]: Stopped ignition-files.service. Dec 13 03:50:30.980969 systemd[1]: Stopping ignition-mount.service... Dec 13 03:50:30.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.987544 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 03:50:30.987630 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 03:50:30.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.988950 systemd[1]: Stopping sysroot-boot.service... Dec 13 03:50:30.989433 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 03:50:30.989487 systemd[1]: Stopped systemd-udev-trigger.service. 
Dec 13 03:50:31.000102 ignition[817]: INFO : Ignition 2.14.0 Dec 13 03:50:31.000102 ignition[817]: INFO : Stage: umount Dec 13 03:50:31.000102 ignition[817]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:50:31.000102 ignition[817]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 03:50:31.000102 ignition[817]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 03:50:31.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:31.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:31.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:31.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:31.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:31.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:30.989953 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Dec 13 03:50:31.016441 ignition[817]: INFO : umount: umount passed Dec 13 03:50:31.016441 ignition[817]: INFO : Ignition finished successfully Dec 13 03:50:31.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:31.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:31.020000 audit: BPF prog-id=6 op=UNLOAD Dec 13 03:50:30.989993 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 03:50:31.004973 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 03:50:31.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:31.005091 systemd[1]: Stopped ignition-mount.service. Dec 13 03:50:31.005721 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 03:50:31.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:31.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:31.005769 systemd[1]: Stopped ignition-disks.service. Dec 13 03:50:31.006241 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 03:50:31.006277 systemd[1]: Stopped ignition-kargs.service. Dec 13 03:50:31.006745 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 03:50:31.006784 systemd[1]: Stopped ignition-fetch.service. 
Dec 13 03:50:31.007252 systemd[1]: Stopped target network.target. Dec 13 03:50:31.007656 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 03:50:31.007694 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 03:50:31.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:31.008140 systemd[1]: Stopped target paths.target. Dec 13 03:50:31.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:31.008613 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 03:50:31.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:31.008657 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 03:50:31.009117 systemd[1]: Stopped target slices.target. Dec 13 03:50:31.009520 systemd[1]: Stopped target sockets.target. Dec 13 03:50:31.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:31.009925 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 03:50:31.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:31.009951 systemd[1]: Closed iscsid.socket. Dec 13 03:50:31.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 03:50:31.010391 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 03:50:31.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:31.010414 systemd[1]: Closed iscsiuio.socket. Dec 13 03:50:31.010795 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 03:50:31.010830 systemd[1]: Stopped ignition-setup.service. Dec 13 03:50:31.011390 systemd[1]: Stopping systemd-networkd.service... Dec 13 03:50:31.011942 systemd[1]: Stopping systemd-resolved.service... Dec 13 03:50:31.014651 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 03:50:31.014740 systemd[1]: Stopped systemd-resolved.service. Dec 13 03:50:31.015150 systemd-networkd[634]: eth0: DHCPv6 lease lost Dec 13 03:50:31.047000 audit: BPF prog-id=9 op=UNLOAD Dec 13 03:50:31.017767 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 03:50:31.017859 systemd[1]: Stopped systemd-networkd.service. Dec 13 03:50:31.018986 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 03:50:31.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:31.019023 systemd[1]: Closed systemd-networkd.socket. Dec 13 03:50:31.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:31.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:50:31.021159 systemd[1]: Stopping network-cleanup.service... Dec 13 03:50:31.022352 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 03:50:31.022398 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 03:50:31.023217 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 03:50:31.023256 systemd[1]: Stopped systemd-sysctl.service. Dec 13 03:50:31.025100 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 03:50:31.025147 systemd[1]: Stopped systemd-modules-load.service. Dec 13 03:50:31.026259 systemd[1]: Stopping systemd-udevd.service... Dec 13 03:50:31.029320 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 03:50:31.029410 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 03:50:31.031609 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 03:50:31.031746 systemd[1]: Stopped systemd-udevd.service. Dec 13 03:50:31.034100 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 03:50:31.034171 systemd[1]: Stopped sysroot-boot.service. Dec 13 03:50:31.035056 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 03:50:31.035140 systemd[1]: Stopped network-cleanup.service. Dec 13 03:50:31.035854 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 03:50:31.035888 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 03:50:31.036735 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 03:50:31.036764 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 03:50:31.037560 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 03:50:31.037596 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 03:50:31.038407 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 03:50:31.038441 systemd[1]: Stopped dracut-cmdline.service. Dec 13 03:50:31.039222 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Dec 13 03:50:31.039257 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 03:50:31.040045 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 03:50:31.040081 systemd[1]: Stopped initrd-setup-root.service. Dec 13 03:50:31.041448 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 03:50:31.042170 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 03:50:31.042217 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 03:50:31.050131 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 03:50:31.050221 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 03:50:31.050768 systemd[1]: Reached target initrd-switch-root.target. Dec 13 03:50:31.052602 systemd[1]: Starting initrd-switch-root.service... Dec 13 03:50:31.071768 systemd[1]: Switching root. Dec 13 03:50:31.091599 iscsid[643]: iscsid shutting down. Dec 13 03:50:31.092157 systemd-journald[185]: Received SIGTERM from PID 1 (n/a). Dec 13 03:50:31.092208 systemd-journald[185]: Journal stopped Dec 13 03:50:35.472911 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 03:50:35.472963 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 03:50:35.472977 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 03:50:35.472992 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 03:50:35.473004 kernel: SELinux: policy capability open_perms=1 Dec 13 03:50:35.473018 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 03:50:35.473524 kernel: SELinux: policy capability always_check_network=0 Dec 13 03:50:35.473548 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 03:50:35.473560 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 03:50:35.473572 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 03:50:35.473592 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 03:50:35.473606 systemd[1]: Successfully loaded SELinux policy in 92.721ms. Dec 13 03:50:35.473627 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.078ms. Dec 13 03:50:35.473642 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 03:50:35.473661 systemd[1]: Detected virtualization kvm. Dec 13 03:50:35.473674 systemd[1]: Detected architecture x86-64. Dec 13 03:50:35.473686 systemd[1]: Detected first boot. Dec 13 03:50:35.473699 systemd[1]: Hostname set to . Dec 13 03:50:35.473711 systemd[1]: Initializing machine ID from VM UUID. Dec 13 03:50:35.473726 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 03:50:35.473738 systemd[1]: Populated /etc with preset unit settings. Dec 13 03:50:35.473751 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Dec 13 03:50:35.473764 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 03:50:35.473782 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 03:50:35.473795 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 03:50:35.473810 systemd[1]: Stopped iscsiuio.service. Dec 13 03:50:35.473823 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 03:50:35.473835 systemd[1]: Stopped iscsid.service. Dec 13 03:50:35.473847 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 03:50:35.473859 systemd[1]: Stopped initrd-switch-root.service. Dec 13 03:50:35.473871 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 03:50:35.473884 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 03:50:35.473897 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 03:50:35.473911 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 03:50:35.473923 systemd[1]: Created slice system-getty.slice. Dec 13 03:50:35.473935 systemd[1]: Created slice system-modprobe.slice. Dec 13 03:50:35.473949 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 03:50:35.473961 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 03:50:35.473974 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 03:50:35.473988 systemd[1]: Created slice user.slice. Dec 13 03:50:35.474001 systemd[1]: Started systemd-ask-password-console.path. Dec 13 03:50:35.474014 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 03:50:35.474025 systemd[1]: Set up automount boot.automount. Dec 13 03:50:35.474056 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. 
Dec 13 03:50:35.474069 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 03:50:35.474081 systemd[1]: Stopped target initrd-fs.target. Dec 13 03:50:35.474093 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 03:50:35.474105 systemd[1]: Reached target integritysetup.target. Dec 13 03:50:35.474119 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 03:50:35.474131 systemd[1]: Reached target remote-fs.target. Dec 13 03:50:35.474142 systemd[1]: Reached target slices.target. Dec 13 03:50:35.474154 systemd[1]: Reached target swap.target. Dec 13 03:50:35.474166 systemd[1]: Reached target torcx.target. Dec 13 03:50:35.474178 systemd[1]: Reached target veritysetup.target. Dec 13 03:50:35.474189 systemd[1]: Listening on systemd-coredump.socket. Dec 13 03:50:35.474201 systemd[1]: Listening on systemd-initctl.socket. Dec 13 03:50:35.474213 systemd[1]: Listening on systemd-networkd.socket. Dec 13 03:50:35.474224 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 03:50:35.474238 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 03:50:35.474250 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 03:50:35.474261 systemd[1]: Mounting dev-hugepages.mount... Dec 13 03:50:35.474276 systemd[1]: Mounting dev-mqueue.mount... Dec 13 03:50:35.474287 systemd[1]: Mounting media.mount... Dec 13 03:50:35.474299 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:50:35.474311 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 03:50:35.474322 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 03:50:35.474334 systemd[1]: Mounting tmp.mount... Dec 13 03:50:35.474347 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 03:50:35.474359 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 03:50:35.474371 systemd[1]: Starting kmod-static-nodes.service... 
Dec 13 03:50:35.474382 systemd[1]: Starting modprobe@configfs.service... Dec 13 03:50:35.474394 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:50:35.474408 systemd[1]: Starting modprobe@drm.service... Dec 13 03:50:35.474419 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:50:35.474430 systemd[1]: Starting modprobe@fuse.service... Dec 13 03:50:35.474442 systemd[1]: Starting modprobe@loop.service... Dec 13 03:50:35.474455 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 03:50:35.474467 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 03:50:35.474478 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 03:50:35.474489 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 03:50:35.474501 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 03:50:35.474513 systemd[1]: Stopped systemd-journald.service. Dec 13 03:50:35.474525 systemd[1]: Starting systemd-journald.service... Dec 13 03:50:35.474536 kernel: loop: module loaded Dec 13 03:50:35.474547 systemd[1]: Starting systemd-modules-load.service... Dec 13 03:50:35.474560 systemd[1]: Starting systemd-network-generator.service... Dec 13 03:50:35.474572 systemd[1]: Starting systemd-remount-fs.service... Dec 13 03:50:35.474583 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 03:50:35.474595 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 03:50:35.474606 systemd[1]: Stopped verity-setup.service. Dec 13 03:50:35.474618 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:50:35.474630 systemd[1]: Mounted dev-hugepages.mount. Dec 13 03:50:35.474641 systemd[1]: Mounted dev-mqueue.mount. Dec 13 03:50:35.474653 systemd[1]: Mounted media.mount. Dec 13 03:50:35.474666 kernel: fuse: init (API version 7.34) Dec 13 03:50:35.474677 systemd[1]: Mounted sys-kernel-debug.mount. 
Dec 13 03:50:35.474689 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 03:50:35.474700 systemd[1]: Mounted tmp.mount. Dec 13 03:50:35.474712 systemd[1]: Finished kmod-static-nodes.service. Dec 13 03:50:35.474725 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 03:50:35.474736 systemd[1]: Finished modprobe@configfs.service. Dec 13 03:50:35.474749 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:50:35.474760 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:50:35.474772 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 03:50:35.474783 systemd[1]: Finished modprobe@drm.service. Dec 13 03:50:35.474795 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 03:50:35.474806 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 03:50:35.474817 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 03:50:35.474831 systemd[1]: Finished modprobe@fuse.service. Dec 13 03:50:35.474843 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 03:50:35.474854 systemd[1]: Finished modprobe@loop.service. Dec 13 03:50:35.474868 systemd-journald[931]: Journal started Dec 13 03:50:35.474911 systemd-journald[931]: Runtime Journal (/run/log/journal/9be88b6daeca4c6d9ec0e91a4bfefd38) is 4.9M, max 39.5M, 34.5M free. Dec 13 03:50:31.385000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 03:50:31.501000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 03:50:35.478839 systemd[1]: Started systemd-journald.service. 
Dec 13 03:50:31.501000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 03:50:31.502000 audit: BPF prog-id=10 op=LOAD Dec 13 03:50:31.502000 audit: BPF prog-id=10 op=UNLOAD Dec 13 03:50:31.502000 audit: BPF prog-id=11 op=LOAD Dec 13 03:50:31.502000 audit: BPF prog-id=11 op=UNLOAD Dec 13 03:50:31.671000 audit[849]: AVC avc: denied { associate } for pid=849 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 03:50:31.671000 audit[849]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=832 pid=849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:50:31.671000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 03:50:31.675000 audit[849]: AVC avc: denied { associate } for pid=849 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 03:50:31.675000 audit[849]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a9 a2=1ed a3=0 items=2 ppid=832 pid=849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:50:31.675000 audit: CWD cwd="/" Dec 13 03:50:31.675000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:31.675000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:31.675000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 03:50:35.240000 audit: BPF prog-id=12 op=LOAD Dec 13 03:50:35.240000 audit: BPF prog-id=3 op=UNLOAD Dec 13 03:50:35.240000 audit: BPF prog-id=13 op=LOAD Dec 13 03:50:35.240000 audit: BPF prog-id=14 op=LOAD Dec 13 03:50:35.240000 audit: BPF prog-id=4 op=UNLOAD Dec 13 03:50:35.240000 audit: BPF prog-id=5 op=UNLOAD Dec 13 03:50:35.241000 audit: BPF prog-id=15 op=LOAD Dec 13 03:50:35.242000 audit: BPF prog-id=12 op=UNLOAD Dec 13 03:50:35.242000 audit: BPF prog-id=16 op=LOAD Dec 13 03:50:35.242000 audit: BPF prog-id=17 op=LOAD Dec 13 03:50:35.242000 audit: BPF prog-id=13 op=UNLOAD Dec 13 03:50:35.242000 audit: BPF prog-id=14 op=UNLOAD Dec 13 03:50:35.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:50:35.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.252000 audit: BPF prog-id=15 op=UNLOAD Dec 13 03:50:35.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:50:35.406000 audit: BPF prog-id=18 op=LOAD Dec 13 03:50:35.406000 audit: BPF prog-id=19 op=LOAD Dec 13 03:50:35.407000 audit: BPF prog-id=20 op=LOAD Dec 13 03:50:35.407000 audit: BPF prog-id=16 op=UNLOAD Dec 13 03:50:35.407000 audit: BPF prog-id=17 op=UNLOAD Dec 13 03:50:35.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:50:35.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.469000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 03:50:35.469000 audit[931]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffdf4e6b720 a2=4000 a3=7ffdf4e6b7bc items=0 ppid=1 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:50:35.469000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 03:50:35.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:50:35.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.238483 systemd[1]: Queued start job for default target multi-user.target. 
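The PROCTITLE records in the torcx-generator audit events above are hex-encoded because the process title contains NUL bytes separating argv entries. They can be decoded with a few lines of Python (a sketch; `decode_proctitle` is an illustrative helper, not an auditd tool):

```python
def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE hex string; argv entries are NUL-separated."""
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode("utf-8", "replace")

# Leading portion of the proctitle value logged above (the full record is
# truncated in the log itself).
proctitle = ("2F7573722F6C69622F73797374656D642F73797374656D2D67656E65"
             "7261746F72732F746F7263782D67656E657261746F72")
print(decode_proctitle(proctitle))
# /usr/lib/systemd/system-generators/torcx-generator
```

Applied to the full records above, this yields the torcx-generator invocation with its `/run/systemd/generator` output directories as arguments (the tail of the value is truncated in the log).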
Dec 13 03:50:31.664248 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 03:50:35.238497 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 03:50:31.665240 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 03:50:35.243418 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 03:50:31.665290 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 03:50:35.476954 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 03:50:31.665387 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:31Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 03:50:35.477678 systemd[1]: Finished systemd-modules-load.service. Dec 13 03:50:31.665414 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:31Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 03:50:35.478417 systemd[1]: Finished systemd-network-generator.service. Dec 13 03:50:31.665486 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:31Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 03:50:35.479508 systemd[1]: Finished systemd-remount-fs.service. 
Dec 13 03:50:31.665520 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:31Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 03:50:35.481415 systemd[1]: Reached target network-pre.target. Dec 13 03:50:31.665938 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:31Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 03:50:35.483639 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 03:50:31.666027 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 03:50:31.666110 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 03:50:31.668859 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:31Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 03:50:31.668948 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:31Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 03:50:35.489205 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 03:50:31.668996 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 03:50:35.489674 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
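The `store skipped ... no such file or directory` messages show torcx-generator probing each configured store path in order and ignoring the ones that do not exist. A rough sketch of that probe loop (`existing_stores` is a hypothetical helper for illustration, not torcx's actual Go implementation):

```python
import os
import tempfile

def existing_stores(candidates):
    """Split candidate store paths into (found, skipped), preserving order,
    mirroring the 'store skipped' messages for paths that do not exist."""
    found, skipped = [], []
    for path in candidates:
        (found if os.path.isdir(path) else skipped).append(path)
    return found, skipped

# Example with one real directory and one missing store path.
with tempfile.TemporaryDirectory() as store:
    found, skipped = existing_stores([store, "/nonexistent/torcx/store"])
    print(found == [store], skipped)
```

In the boot above, only `/usr/share/torcx/store` exists; the OEM and `/var/lib/torcx` stores are skipped, so the docker archives are resolved from the vendor store alone.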
Dec 13 03:50:31.669077 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 03:50:31.669127 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 03:50:31.669165 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 03:50:34.816161 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:34Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 03:50:34.816707 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:34Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 03:50:34.816951 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:34Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 03:50:34.817454 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:34Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" 
image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 03:50:34.817583 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:34Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 03:50:35.492155 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 03:50:34.817732 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T03:50:34Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 03:50:35.493578 systemd[1]: Starting systemd-journal-flush.service... Dec 13 03:50:35.494101 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 03:50:35.495460 systemd[1]: Starting systemd-random-seed.service... Dec 13 03:50:35.496021 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 03:50:35.497287 systemd[1]: Starting systemd-sysctl.service... Dec 13 03:50:35.499569 systemd[1]: Starting systemd-sysusers.service... Dec 13 03:50:35.506660 systemd-journald[931]: Time spent on flushing to /var/log/journal/9be88b6daeca4c6d9ec0e91a4bfefd38 is 39.557ms for 1095 entries. Dec 13 03:50:35.506660 systemd-journald[931]: System Journal (/var/log/journal/9be88b6daeca4c6d9ec0e91a4bfefd38) is 8.0M, max 584.8M, 576.8M free. Dec 13 03:50:35.567208 systemd-journald[931]: Received client request to flush runtime journal. Dec 13 03:50:35.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:50:35.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.504874 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 03:50:35.506450 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 03:50:35.568183 udevadm[958]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 03:50:35.527391 systemd[1]: Finished systemd-random-seed.service. Dec 13 03:50:35.528011 systemd[1]: Reached target first-boot-complete.target. Dec 13 03:50:35.537325 systemd[1]: Finished systemd-sysctl.service. Dec 13 03:50:35.542906 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 03:50:35.544604 systemd[1]: Starting systemd-udev-settle.service... Dec 13 03:50:35.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:35.560399 systemd[1]: Finished systemd-sysusers.service. Dec 13 03:50:35.567966 systemd[1]: Finished systemd-journal-flush.service. Dec 13 03:50:36.129272 systemd[1]: Finished systemd-hwdb-update.service. 
Dec 13 03:50:36.152362 kernel: kauditd_printk_skb: 100 callbacks suppressed Dec 13 03:50:36.152622 kernel: audit: type=1130 audit(1734061836.130:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:36.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:36.133000 audit: BPF prog-id=21 op=LOAD Dec 13 03:50:36.154078 systemd[1]: Starting systemd-udevd.service... Dec 13 03:50:36.156125 kernel: audit: type=1334 audit(1734061836.133:140): prog-id=21 op=LOAD Dec 13 03:50:36.152000 audit: BPF prog-id=22 op=LOAD Dec 13 03:50:36.152000 audit: BPF prog-id=7 op=UNLOAD Dec 13 03:50:36.152000 audit: BPF prog-id=8 op=UNLOAD Dec 13 03:50:36.159807 kernel: audit: type=1334 audit(1734061836.152:141): prog-id=22 op=LOAD Dec 13 03:50:36.159885 kernel: audit: type=1334 audit(1734061836.152:142): prog-id=7 op=UNLOAD Dec 13 03:50:36.159925 kernel: audit: type=1334 audit(1734061836.152:143): prog-id=8 op=UNLOAD Dec 13 03:50:36.203550 systemd-udevd[960]: Using default interface naming scheme 'v252'. Dec 13 03:50:36.243427 systemd[1]: Started systemd-udevd.service. Dec 13 03:50:36.261632 kernel: audit: type=1130 audit(1734061836.248:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:36.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:50:36.271911 kernel: audit: type=1334 audit(1734061836.265:145): prog-id=23 op=LOAD Dec 13 03:50:36.265000 audit: BPF prog-id=23 op=LOAD Dec 13 03:50:36.270671 systemd[1]: Starting systemd-networkd.service... Dec 13 03:50:36.299750 kernel: audit: type=1334 audit(1734061836.290:146): prog-id=24 op=LOAD Dec 13 03:50:36.299977 kernel: audit: type=1334 audit(1734061836.291:147): prog-id=25 op=LOAD Dec 13 03:50:36.290000 audit: BPF prog-id=24 op=LOAD Dec 13 03:50:36.303360 kernel: audit: type=1334 audit(1734061836.291:148): prog-id=26 op=LOAD Dec 13 03:50:36.291000 audit: BPF prog-id=25 op=LOAD Dec 13 03:50:36.291000 audit: BPF prog-id=26 op=LOAD Dec 13 03:50:36.304600 systemd[1]: Starting systemd-userdbd.service... Dec 13 03:50:36.307841 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 03:50:36.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:36.352193 systemd[1]: Started systemd-userdbd.service. 
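The kauditd lines above stamp each record as `audit(EPOCH.msec:serial)`, e.g. `audit(1734061836.130:139)`, where the epoch seconds correspond to the wall-clock timestamps at the start of each line. A small sketch of parsing that stamp:

```python
import re
from datetime import datetime, timezone

def parse_audit_stamp(record: str):
    """Parse an audit(EPOCH.msec:serial) stamp into (UTC datetime, serial)."""
    m = re.search(r"audit\((\d+)\.(\d+):(\d+)\)", record)
    if not m:
        raise ValueError("no audit timestamp found")
    secs, msecs, serial = int(m.group(1)), int(m.group(2)), int(m.group(3))
    ts = datetime.fromtimestamp(secs, tz=timezone.utc).replace(microsecond=msecs * 1000)
    return ts, serial

ts, serial = parse_audit_stamp("audit(1734061836.130:139): pid=1 uid=0")
print(ts.strftime("%b %d %H:%M:%S"), serial)  # Dec 13 03:50:36 139
```

The serial number is what ties multi-record events together: all the `PATH`, `SYSCALL`, `CWD`, and `PROCTITLE` records belonging to one syscall share a single `EPOCH.msec:serial` stamp.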
Dec 13 03:50:36.372077 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 03:50:36.384283 kernel: ACPI: button: Power Button [PWRF] Dec 13 03:50:36.430000 audit[975]: AVC avc: denied { confidentiality } for pid=975 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 03:50:36.430000 audit[975]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e2013413b0 a1=337fc a2=7eff83eeebc5 a3=5 items=110 ppid=960 pid=975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:50:36.451415 systemd-networkd[976]: lo: Link UP Dec 13 03:50:36.451421 systemd-networkd[976]: lo: Gained carrier Dec 13 03:50:36.430000 audit: CWD cwd="/" Dec 13 03:50:36.430000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=1 name=(null) inode=13685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=2 name=(null) inode=13685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.452010 systemd-networkd[976]: Enumeration completed Dec 13 03:50:36.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:36.452158 systemd[1]: Started systemd-networkd.service. 
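The audit `PATH` records that follow report each file's `mode=` field in octal, combining the file type and permission bits (e.g. `040750` is a directory with `rwxr-x---`, `0100640` a regular file with `rw-r-----`). The standard library can render these as `ls`-style strings:

```python
import stat

def render_mode(mode_field: str) -> str:
    """Render an audit PATH record's octal mode= field as an ls-style string."""
    return stat.filemode(int(mode_field, 8))

print(render_mode("040750"))   # drwxr-x---
print(render_mode("0100640"))  # -rw-r-----
```

Here the udev worker is creating tracefs entries (hence the `lockdown_reason="use of tracefs"` AVC above), so the records alternate between `PARENT` directories and `CREATE`d files under the same tracefs mount.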
Dec 13 03:50:36.452216 systemd-networkd[976]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 03:50:36.430000 audit: PATH item=3 name=(null) inode=13686 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=4 name=(null) inode=13685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=5 name=(null) inode=13687 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=6 name=(null) inode=13685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=7 name=(null) inode=13688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=8 name=(null) inode=13688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=9 name=(null) inode=13689 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=10 name=(null) inode=13688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=11 name=(null) inode=13690 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=12 name=(null) inode=13688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=13 name=(null) inode=13691 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=14 name=(null) inode=13688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=15 name=(null) inode=13692 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=16 name=(null) inode=13688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=17 name=(null) inode=13693 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=18 name=(null) inode=13685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=19 name=(null) inode=13694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=20 name=(null) inode=13694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=21 name=(null) inode=13695 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=22 name=(null) inode=13694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=23 name=(null) inode=13696 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=24 name=(null) inode=13694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=25 name=(null) inode=13697 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=26 name=(null) inode=13694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=27 name=(null) inode=13698 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=28 name=(null) inode=13694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=29 name=(null) inode=13699 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
03:50:36.430000 audit: PATH item=30 name=(null) inode=13685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=31 name=(null) inode=13700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=32 name=(null) inode=13700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=33 name=(null) inode=13701 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=34 name=(null) inode=13700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=35 name=(null) inode=13702 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.455913 systemd-networkd[976]: eth0: Link UP Dec 13 03:50:36.455920 systemd-networkd[976]: eth0: Gained carrier Dec 13 03:50:36.430000 audit: PATH item=36 name=(null) inode=13700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=37 name=(null) inode=13703 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=38 name=(null) inode=13700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=39 name=(null) inode=13704 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=40 name=(null) inode=13700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=41 name=(null) inode=13705 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=42 name=(null) inode=13685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=43 name=(null) inode=13706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=44 name=(null) inode=13706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=45 name=(null) inode=13707 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=46 name=(null) inode=13706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=47 name=(null) inode=13708 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=48 name=(null) inode=13706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=49 name=(null) inode=13709 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=50 name=(null) inode=13706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=51 name=(null) inode=13710 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=52 name=(null) inode=13706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=53 name=(null) inode=13711 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=55 name=(null) inode=13712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=56 name=(null) inode=13712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=57 name=(null) inode=13713 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=58 name=(null) inode=13712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=59 name=(null) inode=13714 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=60 name=(null) inode=13712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=61 name=(null) inode=13715 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=62 name=(null) inode=13715 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=63 name=(null) inode=13716 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=64 name=(null) inode=13715 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=65 name=(null) inode=13717 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
03:50:36.430000 audit: PATH item=66 name=(null) inode=13715 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=67 name=(null) inode=13718 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=68 name=(null) inode=13715 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=69 name=(null) inode=13719 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=70 name=(null) inode=13715 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=71 name=(null) inode=13720 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=72 name=(null) inode=13712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=73 name=(null) inode=13721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=74 name=(null) inode=13721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=75 
name=(null) inode=13722 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=76 name=(null) inode=13721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=77 name=(null) inode=13723 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=78 name=(null) inode=13721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=79 name=(null) inode=13724 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=80 name=(null) inode=13721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=81 name=(null) inode=13725 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=82 name=(null) inode=13721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=83 name=(null) inode=13726 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=84 name=(null) inode=13712 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=85 name=(null) inode=13727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=86 name=(null) inode=13727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=87 name=(null) inode=13728 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=88 name=(null) inode=13727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=89 name=(null) inode=13729 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=90 name=(null) inode=13727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=91 name=(null) inode=13730 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=92 name=(null) inode=13727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=93 name=(null) inode=13731 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=94 name=(null) inode=13727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=95 name=(null) inode=13732 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=96 name=(null) inode=13712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=97 name=(null) inode=13733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=98 name=(null) inode=13733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=99 name=(null) inode=13734 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=100 name=(null) inode=13733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=101 name=(null) inode=13735 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=102 name=(null) inode=13733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=103 name=(null) inode=13736 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=104 name=(null) inode=13733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=105 name=(null) inode=13737 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=106 name=(null) inode=13733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=107 name=(null) inode=13738 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PATH item=109 name=(null) inode=13739 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:50:36.430000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 03:50:36.468068 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 13 03:50:36.469397 systemd-networkd[976]: eth0: DHCPv4 address 172.24.4.174/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 03:50:36.475327 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Dec 13 03:50:36.484098 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 03:50:36.490061 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 03:50:36.533445 systemd[1]: Finished systemd-udev-settle.service. Dec 13 03:50:36.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:36.535197 systemd[1]: Starting lvm2-activation-early.service... Dec 13 03:50:36.563843 lvm[989]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 03:50:36.591281 systemd[1]: Finished lvm2-activation-early.service. Dec 13 03:50:36.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:36.592706 systemd[1]: Reached target cryptsetup.target. Dec 13 03:50:36.596276 systemd[1]: Starting lvm2-activation.service... Dec 13 03:50:36.600446 lvm[990]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 03:50:36.628078 systemd[1]: Finished lvm2-activation.service. Dec 13 03:50:36.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:36.629490 systemd[1]: Reached target local-fs-pre.target. Dec 13 03:50:36.630623 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 03:50:36.630685 systemd[1]: Reached target local-fs.target. Dec 13 03:50:36.631781 systemd[1]: Reached target machines.target. Dec 13 03:50:36.635556 systemd[1]: Starting ldconfig.service... 
Dec 13 03:50:36.637963 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:50:36.638192 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:50:36.640715 systemd[1]: Starting systemd-boot-update.service... Dec 13 03:50:36.645740 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 03:50:36.649587 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 03:50:36.655238 systemd[1]: Starting systemd-sysext.service... Dec 13 03:50:36.663653 systemd[1]: boot.automount: Got automount request for /boot, triggered by 992 (bootctl) Dec 13 03:50:36.666530 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 03:50:36.699878 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 03:50:36.724639 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 03:50:36.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:36.783221 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 03:50:36.783625 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 03:50:36.921193 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 03:50:37.365188 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 03:50:37.367582 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 03:50:37.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:50:37.415131 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 03:50:37.448104 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 03:50:37.490310 (sd-sysext)[1005]: Using extensions 'kubernetes'. Dec 13 03:50:37.493407 (sd-sysext)[1005]: Merged extensions into '/usr'. Dec 13 03:50:37.537853 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:50:37.541022 systemd[1]: Mounting usr-share-oem.mount... Dec 13 03:50:37.542561 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 03:50:37.549487 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:50:37.559872 systemd-fsck[1002]: fsck.fat 4.2 (2021-01-31) Dec 13 03:50:37.559872 systemd-fsck[1002]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 03:50:37.556446 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:50:37.558952 systemd[1]: Starting modprobe@loop.service... Dec 13 03:50:37.559461 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:50:37.559591 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:50:37.559732 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:50:37.564610 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 03:50:37.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:37.566232 systemd[1]: Mounted usr-share-oem.mount. 
Dec 13 03:50:37.567498 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:50:37.567726 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:50:37.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:37.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:37.568996 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 03:50:37.569310 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 03:50:37.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:37.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:37.570780 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 03:50:37.570914 systemd[1]: Finished modprobe@loop.service. Dec 13 03:50:37.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:37.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:37.573576 systemd[1]: Finished systemd-sysext.service. 
Dec 13 03:50:37.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:37.575612 systemd[1]: Mounting boot.mount... Dec 13 03:50:37.577229 systemd[1]: Starting ensure-sysext.service... Dec 13 03:50:37.580895 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 03:50:37.580967 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 03:50:37.584393 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 03:50:37.592455 systemd[1]: Mounted boot.mount. Dec 13 03:50:37.594292 systemd[1]: Reloading. Dec 13 03:50:37.633093 systemd-tmpfiles[1013]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 03:50:37.654475 systemd-tmpfiles[1013]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 03:50:37.679844 systemd-tmpfiles[1013]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 03:50:37.688769 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T03:50:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 03:50:37.689183 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T03:50:37Z" level=info msg="torcx already run" Dec 13 03:50:37.764371 systemd-networkd[976]: eth0: Gained IPv6LL Dec 13 03:50:37.820412 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Dec 13 03:50:37.821070 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 03:50:37.854946 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 03:50:37.953197 ldconfig[991]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 03:50:37.953000 audit: BPF prog-id=27 op=LOAD Dec 13 03:50:37.953000 audit: BPF prog-id=24 op=UNLOAD Dec 13 03:50:37.953000 audit: BPF prog-id=28 op=LOAD Dec 13 03:50:37.953000 audit: BPF prog-id=29 op=LOAD Dec 13 03:50:37.953000 audit: BPF prog-id=25 op=UNLOAD Dec 13 03:50:37.953000 audit: BPF prog-id=26 op=UNLOAD Dec 13 03:50:37.954000 audit: BPF prog-id=30 op=LOAD Dec 13 03:50:37.954000 audit: BPF prog-id=23 op=UNLOAD Dec 13 03:50:37.956000 audit: BPF prog-id=31 op=LOAD Dec 13 03:50:37.956000 audit: BPF prog-id=32 op=LOAD Dec 13 03:50:37.956000 audit: BPF prog-id=21 op=UNLOAD Dec 13 03:50:37.956000 audit: BPF prog-id=22 op=UNLOAD Dec 13 03:50:37.958000 audit: BPF prog-id=33 op=LOAD Dec 13 03:50:37.958000 audit: BPF prog-id=18 op=UNLOAD Dec 13 03:50:37.958000 audit: BPF prog-id=34 op=LOAD Dec 13 03:50:37.958000 audit: BPF prog-id=35 op=LOAD Dec 13 03:50:37.958000 audit: BPF prog-id=19 op=UNLOAD Dec 13 03:50:37.958000 audit: BPF prog-id=20 op=UNLOAD Dec 13 03:50:37.963379 systemd[1]: Finished systemd-boot-update.service. Dec 13 03:50:37.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:50:37.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:37.965515 systemd[1]: Finished ldconfig.service. Dec 13 03:50:37.966366 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 03:50:37.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:37.970191 systemd[1]: Starting audit-rules.service... Dec 13 03:50:37.971704 systemd[1]: Starting clean-ca-certificates.service... Dec 13 03:50:37.973914 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 03:50:37.979000 audit: BPF prog-id=36 op=LOAD Dec 13 03:50:37.982195 systemd[1]: Starting systemd-resolved.service... Dec 13 03:50:37.983000 audit: BPF prog-id=37 op=LOAD Dec 13 03:50:37.984301 systemd[1]: Starting systemd-timesyncd.service... Dec 13 03:50:37.986365 systemd[1]: Starting systemd-update-utmp.service... Dec 13 03:50:37.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:37.988174 systemd[1]: Finished clean-ca-certificates.service. Dec 13 03:50:37.991867 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 03:50:37.994831 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:50:37.995086 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Dec 13 03:50:37.997439 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:50:37.999125 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:50:38.001290 systemd[1]: Starting modprobe@loop.service... Dec 13 03:50:38.002158 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:50:38.002291 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:50:38.002425 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 03:50:38.002526 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:50:38.002000 audit[1086]: SYSTEM_BOOT pid=1086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.003518 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:50:38.003658 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:50:38.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.005631 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 03:50:38.005753 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 03:50:38.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.009857 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 03:50:38.009977 systemd[1]: Finished modprobe@loop.service. Dec 13 03:50:38.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.010832 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 03:50:38.011009 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 03:50:38.014678 systemd[1]: Finished systemd-update-utmp.service. Dec 13 03:50:38.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.016344 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:50:38.016619 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Dec 13 03:50:38.019280 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:50:38.021361 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:50:38.023491 systemd[1]: Starting modprobe@loop.service... Dec 13 03:50:38.024870 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:50:38.025074 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:50:38.025237 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 03:50:38.025351 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:50:38.026476 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:50:38.026888 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:50:38.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.031764 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:50:38.032012 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 03:50:38.034025 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:50:38.036716 systemd[1]: Starting modprobe@drm.service... 
Dec 13 03:50:38.037334 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:50:38.037536 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:50:38.038819 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 03:50:38.039916 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 03:50:38.040078 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:50:38.041292 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 03:50:38.041445 systemd[1]: Finished modprobe@loop.service. Dec 13 03:50:38.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.043608 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 03:50:38.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.044489 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 03:50:38.044604 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 03:50:38.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.045422 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:50:38.047081 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:50:38.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.047969 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 03:50:38.048119 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 03:50:38.049770 systemd[1]: Starting systemd-update-done.service... Dec 13 03:50:38.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.052243 systemd[1]: Finished ensure-sysext.service. Dec 13 03:50:38.057112 systemd[1]: Finished systemd-update-done.service. 
Dec 13 03:50:38.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.058906 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 03:50:38.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.060347 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 03:50:38.060471 systemd[1]: Finished modprobe@drm.service. Dec 13 03:50:38.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:50:38.096000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 03:50:38.096000 audit[1110]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc3256a6d0 a2=420 a3=0 items=0 ppid=1080 pid=1110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:50:38.096000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 03:50:38.097347 augenrules[1110]: No rules Dec 13 03:50:38.097606 systemd[1]: Finished audit-rules.service. Dec 13 03:50:38.105761 systemd[1]: Started systemd-timesyncd.service. 
Dec 13 03:50:38.106385 systemd[1]: Reached target time-set.target. Dec 13 03:50:38.117083 systemd-resolved[1084]: Positive Trust Anchors: Dec 13 03:50:38.117409 systemd-resolved[1084]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 03:50:38.117510 systemd-resolved[1084]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 03:50:38.124774 systemd-resolved[1084]: Using system hostname 'ci-3510-3-6-5-5611054123.novalocal'. Dec 13 03:50:38.126425 systemd[1]: Started systemd-resolved.service. Dec 13 03:50:38.126890 systemd[1]: Reached target network.target. Dec 13 03:50:38.127312 systemd[1]: Reached target network-online.target. Dec 13 03:50:38.127735 systemd[1]: Reached target nss-lookup.target. Dec 13 03:50:38.128176 systemd[1]: Reached target sysinit.target. Dec 13 03:50:38.128687 systemd[1]: Started motdgen.path. Dec 13 03:50:38.129143 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 03:50:38.129849 systemd[1]: Started logrotate.timer. Dec 13 03:50:38.130361 systemd[1]: Started mdadm.timer. Dec 13 03:50:38.130744 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 03:50:38.131180 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 03:50:38.131210 systemd[1]: Reached target paths.target. Dec 13 03:50:38.131603 systemd[1]: Reached target timers.target. Dec 13 03:50:38.132465 systemd[1]: Listening on dbus.socket. Dec 13 03:50:39.029006 systemd-resolved[1084]: Clock change detected. Flushing caches. 
Dec 13 03:50:39.029046 systemd-timesyncd[1085]: Contacted time server 51.255.95.80:123 (0.flatcar.pool.ntp.org). Dec 13 03:50:39.029087 systemd-timesyncd[1085]: Initial clock synchronization to Fri 2024-12-13 03:50:39.028969 UTC. Dec 13 03:50:39.030265 systemd[1]: Starting docker.socket... Dec 13 03:50:39.033987 systemd[1]: Listening on sshd.socket. Dec 13 03:50:39.034517 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:50:39.034963 systemd[1]: Listening on docker.socket. Dec 13 03:50:39.035542 systemd[1]: Reached target sockets.target. Dec 13 03:50:39.035952 systemd[1]: Reached target basic.target. Dec 13 03:50:39.036438 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 03:50:39.036467 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 03:50:39.037427 systemd[1]: Starting containerd.service... Dec 13 03:50:39.038784 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 03:50:39.040745 systemd[1]: Starting dbus.service... Dec 13 03:50:39.043082 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 03:50:39.049398 systemd[1]: Starting extend-filesystems.service... Dec 13 03:50:39.050453 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 03:50:39.053904 jq[1123]: false Dec 13 03:50:39.055367 systemd[1]: Starting kubelet.service... Dec 13 03:50:39.057026 systemd[1]: Starting motdgen.service... Dec 13 03:50:39.058560 systemd[1]: Starting prepare-helm.service... Dec 13 03:50:39.061152 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 03:50:39.063968 systemd[1]: Starting sshd-keygen.service... 
Dec 13 03:50:39.070386 systemd[1]: Starting systemd-logind.service... Dec 13 03:50:39.070861 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:50:39.070913 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 03:50:39.071408 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 03:50:39.085021 jq[1133]: true Dec 13 03:50:39.073018 systemd[1]: Starting update-engine.service... Dec 13 03:50:39.074433 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 03:50:39.078919 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 03:50:39.079144 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 03:50:39.095576 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 03:50:39.095763 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 03:50:39.119022 tar[1139]: linux-amd64/helm Dec 13 03:50:39.121528 jq[1143]: true Dec 13 03:50:39.161800 extend-filesystems[1124]: Found loop1 Dec 13 03:50:39.161800 extend-filesystems[1124]: Found vda Dec 13 03:50:39.161800 extend-filesystems[1124]: Found vda1 Dec 13 03:50:39.161800 extend-filesystems[1124]: Found vda2 Dec 13 03:50:39.161800 extend-filesystems[1124]: Found vda3 Dec 13 03:50:39.164886 extend-filesystems[1124]: Found usr Dec 13 03:50:39.164886 extend-filesystems[1124]: Found vda4 Dec 13 03:50:39.164886 extend-filesystems[1124]: Found vda6 Dec 13 03:50:39.164886 extend-filesystems[1124]: Found vda7 Dec 13 03:50:39.164886 extend-filesystems[1124]: Found vda9 Dec 13 03:50:39.164886 extend-filesystems[1124]: Checking size of /dev/vda9 Dec 13 03:50:39.164411 systemd[1]: Started dbus.service. 
Dec 13 03:50:39.164226 dbus-daemon[1120]: [system] SELinux support is enabled Dec 13 03:50:39.167339 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 03:50:39.167364 systemd[1]: Reached target system-config.target. Dec 13 03:50:39.169470 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 03:50:39.169487 systemd[1]: Reached target user-config.target. Dec 13 03:50:39.175866 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 03:50:39.176050 systemd[1]: Finished motdgen.service. Dec 13 03:50:39.198831 extend-filesystems[1124]: Resized partition /dev/vda9 Dec 13 03:50:39.212141 extend-filesystems[1175]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 03:50:39.251230 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Dec 13 03:50:39.275008 env[1144]: time="2024-12-13T03:50:39.274931808Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 03:50:39.321201 env[1144]: time="2024-12-13T03:50:39.311186334Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 03:50:39.321201 env[1144]: time="2024-12-13T03:50:39.320839789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 03:50:39.291283 systemd[1]: Started update-engine.service. Dec 13 03:50:39.321433 update_engine[1132]: I1213 03:50:39.282862 1132 main.cc:92] Flatcar Update Engine starting Dec 13 03:50:39.321433 update_engine[1132]: I1213 03:50:39.295249 1132 update_check_scheduler.cc:74] Next update check in 2m56s Dec 13 03:50:39.294370 systemd[1]: Started locksmithd.service. 
Dec 13 03:50:39.320543 systemd-logind[1131]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 03:50:39.320571 systemd-logind[1131]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 03:50:39.323082 systemd-logind[1131]: New seat seat0. Dec 13 03:50:39.329190 env[1144]: time="2024-12-13T03:50:39.327943251Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 03:50:39.329190 env[1144]: time="2024-12-13T03:50:39.327986673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 03:50:39.329190 env[1144]: time="2024-12-13T03:50:39.328282217Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 03:50:39.329190 env[1144]: time="2024-12-13T03:50:39.328336990Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 03:50:39.329190 env[1144]: time="2024-12-13T03:50:39.328355214Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 03:50:39.329190 env[1144]: time="2024-12-13T03:50:39.328367918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 03:50:39.329190 env[1144]: time="2024-12-13T03:50:39.328505325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 03:50:39.328794 systemd[1]: Started systemd-logind.service. 
Dec 13 03:50:39.329918 env[1144]: time="2024-12-13T03:50:39.329567988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 03:50:39.330659 env[1144]: time="2024-12-13T03:50:39.330612918Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 03:50:39.335194 env[1144]: time="2024-12-13T03:50:39.335169314Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 03:50:39.341798 env[1144]: time="2024-12-13T03:50:39.341755787Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 03:50:39.341968 env[1144]: time="2024-12-13T03:50:39.341950382Z" level=info msg="metadata content store policy set" policy=shared Dec 13 03:50:39.355137 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Dec 13 03:50:39.465347 bash[1174]: Updated "/home/core/.ssh/authorized_keys" Dec 13 03:50:39.466244 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 03:50:39.469663 extend-filesystems[1175]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 03:50:39.469663 extend-filesystems[1175]: old_desc_blocks = 1, new_desc_blocks = 3 Dec 13 03:50:39.469663 extend-filesystems[1175]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Dec 13 03:50:39.468496 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 03:50:39.488623 env[1144]: time="2024-12-13T03:50:39.480166606Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 03:50:39.488623 env[1144]: time="2024-12-13T03:50:39.480292482Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Dec 13 03:50:39.488623 env[1144]: time="2024-12-13T03:50:39.480372332Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 03:50:39.488623 env[1144]: time="2024-12-13T03:50:39.480565204Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 03:50:39.488623 env[1144]: time="2024-12-13T03:50:39.480746434Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 03:50:39.488623 env[1144]: time="2024-12-13T03:50:39.480792189Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 03:50:39.488623 env[1144]: time="2024-12-13T03:50:39.480829810Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 03:50:39.488623 env[1144]: time="2024-12-13T03:50:39.480911964Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 03:50:39.488623 env[1144]: time="2024-12-13T03:50:39.480951618Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 03:50:39.488623 env[1144]: time="2024-12-13T03:50:39.480990862Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 03:50:39.488623 env[1144]: time="2024-12-13T03:50:39.481026018Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 03:50:39.488623 env[1144]: time="2024-12-13T03:50:39.481059431Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 03:50:39.488623 env[1144]: time="2024-12-13T03:50:39.481383188Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Dec 13 03:50:39.488623 env[1144]: time="2024-12-13T03:50:39.481648666Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 03:50:39.489600 extend-filesystems[1124]: Resized filesystem in /dev/vda9 Dec 13 03:50:39.468858 systemd[1]: Finished extend-filesystems.service. Dec 13 03:50:39.498664 env[1144]: time="2024-12-13T03:50:39.482616571Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 03:50:39.498664 env[1144]: time="2024-12-13T03:50:39.482693034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 03:50:39.498664 env[1144]: time="2024-12-13T03:50:39.482743940Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 03:50:39.498664 env[1144]: time="2024-12-13T03:50:39.482870187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 03:50:39.498664 env[1144]: time="2024-12-13T03:50:39.483024055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 03:50:39.498664 env[1144]: time="2024-12-13T03:50:39.483065313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 03:50:39.498664 env[1144]: time="2024-12-13T03:50:39.483099387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 03:50:39.498664 env[1144]: time="2024-12-13T03:50:39.483201198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 03:50:39.498664 env[1144]: time="2024-12-13T03:50:39.483236023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Dec 13 03:50:39.498664 env[1144]: time="2024-12-13T03:50:39.483267041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 03:50:39.498664 env[1144]: time="2024-12-13T03:50:39.483297989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 03:50:39.498664 env[1144]: time="2024-12-13T03:50:39.483334057Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 03:50:39.498664 env[1144]: time="2024-12-13T03:50:39.483718488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 03:50:39.498664 env[1144]: time="2024-12-13T03:50:39.483764694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 03:50:39.498664 env[1144]: time="2024-12-13T03:50:39.483797786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 03:50:39.497400 systemd[1]: Started containerd.service. Dec 13 03:50:39.503947 env[1144]: time="2024-12-13T03:50:39.483855134Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 03:50:39.503947 env[1144]: time="2024-12-13T03:50:39.483896041Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 03:50:39.503947 env[1144]: time="2024-12-13T03:50:39.483926338Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Dec 13 03:50:39.503947 env[1144]: time="2024-12-13T03:50:39.483978065Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 03:50:39.503947 env[1144]: time="2024-12-13T03:50:39.484081128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 03:50:39.504143 env[1144]: time="2024-12-13T03:50:39.484627422Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false 
MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 03:50:39.504143 env[1144]: time="2024-12-13T03:50:39.484787583Z" level=info msg="Connect containerd service" Dec 13 03:50:39.504143 env[1144]: time="2024-12-13T03:50:39.484852334Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 03:50:39.504143 env[1144]: time="2024-12-13T03:50:39.496272783Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 03:50:39.504143 env[1144]: time="2024-12-13T03:50:39.497002101Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 03:50:39.504143 env[1144]: time="2024-12-13T03:50:39.497097320Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 13 03:50:39.504143 env[1144]: time="2024-12-13T03:50:39.497279191Z" level=info msg="containerd successfully booted in 0.226697s" Dec 13 03:50:39.504143 env[1144]: time="2024-12-13T03:50:39.498574440Z" level=info msg="Start subscribing containerd event" Dec 13 03:50:39.504143 env[1144]: time="2024-12-13T03:50:39.498700927Z" level=info msg="Start recovering state" Dec 13 03:50:39.504143 env[1144]: time="2024-12-13T03:50:39.498853784Z" level=info msg="Start event monitor" Dec 13 03:50:39.504143 env[1144]: time="2024-12-13T03:50:39.498912604Z" level=info msg="Start snapshots syncer" Dec 13 03:50:39.504143 env[1144]: time="2024-12-13T03:50:39.499056043Z" level=info msg="Start cni network conf syncer for default" Dec 13 03:50:39.504143 env[1144]: time="2024-12-13T03:50:39.499092231Z" level=info msg="Start streaming server" Dec 13 03:50:40.272148 tar[1139]: linux-amd64/LICENSE Dec 13 03:50:40.273582 tar[1139]: linux-amd64/README.md Dec 13 03:50:40.278632 systemd[1]: Finished prepare-helm.service. Dec 13 03:50:40.295263 locksmithd[1179]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 03:50:40.973938 sshd_keygen[1151]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 03:50:41.023746 systemd[1]: Finished sshd-keygen.service. Dec 13 03:50:41.028717 systemd[1]: Starting issuegen.service... Dec 13 03:50:41.034564 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 03:50:41.034916 systemd[1]: Finished issuegen.service. Dec 13 03:50:41.039363 systemd[1]: Starting systemd-user-sessions.service... Dec 13 03:50:41.048625 systemd[1]: Finished systemd-user-sessions.service. Dec 13 03:50:41.050797 systemd[1]: Started getty@tty1.service. Dec 13 03:50:41.052593 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 03:50:41.053222 systemd[1]: Reached target getty.target. Dec 13 03:50:41.077967 systemd[1]: Started kubelet.service. 
Dec 13 03:50:41.656352 systemd[1]: Created slice system-sshd.slice. Dec 13 03:50:41.659976 systemd[1]: Started sshd@0-172.24.4.174:22-172.24.4.1:55334.service. Dec 13 03:50:42.963316 kubelet[1204]: E1213 03:50:42.963160 1204 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 03:50:42.967707 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 03:50:42.968001 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 03:50:42.968743 systemd[1]: kubelet.service: Consumed 1.803s CPU time. Dec 13 03:50:43.067348 sshd[1210]: Accepted publickey for core from 172.24.4.1 port 55334 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 03:50:43.072082 sshd[1210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:50:43.098667 systemd[1]: Created slice user-500.slice. Dec 13 03:50:43.102561 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 03:50:43.111343 systemd-logind[1131]: New session 1 of user core. Dec 13 03:50:43.129029 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 03:50:43.133930 systemd[1]: Starting user@500.service... Dec 13 03:50:43.150325 (systemd)[1216]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:50:43.343543 systemd[1216]: Queued start job for default target default.target. Dec 13 03:50:43.344915 systemd[1216]: Reached target paths.target. Dec 13 03:50:43.344974 systemd[1216]: Reached target sockets.target. Dec 13 03:50:43.345009 systemd[1216]: Reached target timers.target. Dec 13 03:50:43.345042 systemd[1216]: Reached target basic.target. Dec 13 03:50:43.345190 systemd[1216]: Reached target default.target. 
Dec 13 03:50:43.345256 systemd[1216]: Startup finished in 181ms. Dec 13 03:50:43.345334 systemd[1]: Started user@500.service. Dec 13 03:50:43.349425 systemd[1]: Started session-1.scope. Dec 13 03:50:43.786039 systemd[1]: Started sshd@1-172.24.4.174:22-172.24.4.1:55336.service. Dec 13 03:50:46.033090 sshd[1225]: Accepted publickey for core from 172.24.4.1 port 55336 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 03:50:46.035839 sshd[1225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:50:46.046228 systemd-logind[1131]: New session 2 of user core. Dec 13 03:50:46.047890 systemd[1]: Started session-2.scope. Dec 13 03:50:46.203990 coreos-metadata[1119]: Dec 13 03:50:46.198 WARN failed to locate config-drive, using the metadata service API instead Dec 13 03:50:46.292708 coreos-metadata[1119]: Dec 13 03:50:46.292 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 03:50:46.504825 coreos-metadata[1119]: Dec 13 03:50:46.504 INFO Fetch successful Dec 13 03:50:46.505256 coreos-metadata[1119]: Dec 13 03:50:46.505 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 03:50:46.518975 coreos-metadata[1119]: Dec 13 03:50:46.518 INFO Fetch successful Dec 13 03:50:46.524052 unknown[1119]: wrote ssh authorized keys file for user: core Dec 13 03:50:46.550017 update-ssh-keys[1231]: Updated "/home/core/.ssh/authorized_keys" Dec 13 03:50:46.550709 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 03:50:46.551627 systemd[1]: Reached target multi-user.target. Dec 13 03:50:46.557074 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 03:50:46.570774 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 03:50:46.571230 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 03:50:46.572442 systemd[1]: Startup finished in 1.038s (kernel) + 8.468s (initrd) + 14.412s (userspace) = 23.919s. 
Dec 13 03:50:46.694720 sshd[1225]: pam_unix(sshd:session): session closed for user core
Dec 13 03:50:46.702481 systemd[1]: sshd@1-172.24.4.174:22-172.24.4.1:55336.service: Deactivated successfully.
Dec 13 03:50:46.703808 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 03:50:46.705309 systemd-logind[1131]: Session 2 logged out. Waiting for processes to exit.
Dec 13 03:50:46.707881 systemd[1]: Started sshd@2-172.24.4.174:22-172.24.4.1:57352.service.
Dec 13 03:50:46.712001 systemd-logind[1131]: Removed session 2.
Dec 13 03:50:48.110014 sshd[1235]: Accepted publickey for core from 172.24.4.1 port 57352 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:50:48.112683 sshd[1235]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:50:48.123778 systemd[1]: Started session-3.scope.
Dec 13 03:50:48.124830 systemd-logind[1131]: New session 3 of user core.
Dec 13 03:50:48.752702 sshd[1235]: pam_unix(sshd:session): session closed for user core
Dec 13 03:50:48.758650 systemd[1]: sshd@2-172.24.4.174:22-172.24.4.1:57352.service: Deactivated successfully.
Dec 13 03:50:48.760231 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 03:50:48.761635 systemd-logind[1131]: Session 3 logged out. Waiting for processes to exit.
Dec 13 03:50:48.763907 systemd-logind[1131]: Removed session 3.
Dec 13 03:50:53.221275 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 03:50:53.221725 systemd[1]: Stopped kubelet.service.
Dec 13 03:50:53.221807 systemd[1]: kubelet.service: Consumed 1.803s CPU time.
Dec 13 03:50:53.225411 systemd[1]: Starting kubelet.service...
Dec 13 03:50:53.473238 systemd[1]: Started kubelet.service.
Dec 13 03:50:53.874878 kubelet[1244]: E1213 03:50:53.874727 1244 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 03:50:53.881490 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 03:50:53.881735 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 03:50:58.763646 systemd[1]: Started sshd@3-172.24.4.174:22-172.24.4.1:37324.service.
Dec 13 03:50:59.800414 sshd[1252]: Accepted publickey for core from 172.24.4.1 port 37324 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:50:59.803522 sshd[1252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:50:59.813072 systemd-logind[1131]: New session 4 of user core.
Dec 13 03:50:59.813895 systemd[1]: Started session-4.scope.
Dec 13 03:51:00.415152 sshd[1252]: pam_unix(sshd:session): session closed for user core
Dec 13 03:51:00.422406 systemd[1]: Started sshd@4-172.24.4.174:22-172.24.4.1:37328.service.
Dec 13 03:51:00.423601 systemd[1]: sshd@3-172.24.4.174:22-172.24.4.1:37324.service: Deactivated successfully.
Dec 13 03:51:00.425001 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 03:51:00.429777 systemd-logind[1131]: Session 4 logged out. Waiting for processes to exit.
Dec 13 03:51:00.432762 systemd-logind[1131]: Removed session 4.
Dec 13 03:51:01.524054 sshd[1257]: Accepted publickey for core from 172.24.4.1 port 37328 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:51:01.527262 sshd[1257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:51:01.544853 systemd-logind[1131]: New session 5 of user core.
Dec 13 03:51:01.545081 systemd[1]: Started session-5.scope.
Dec 13 03:51:02.095827 sshd[1257]: pam_unix(sshd:session): session closed for user core
Dec 13 03:51:02.103672 systemd[1]: Started sshd@5-172.24.4.174:22-172.24.4.1:37330.service.
Dec 13 03:51:02.104882 systemd[1]: sshd@4-172.24.4.174:22-172.24.4.1:37328.service: Deactivated successfully.
Dec 13 03:51:02.108275 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 03:51:02.110758 systemd-logind[1131]: Session 5 logged out. Waiting for processes to exit.
Dec 13 03:51:02.114004 systemd-logind[1131]: Removed session 5.
Dec 13 03:51:03.373300 sshd[1263]: Accepted publickey for core from 172.24.4.1 port 37330 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:51:03.376420 sshd[1263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:51:03.386872 systemd[1]: Started session-6.scope.
Dec 13 03:51:03.388696 systemd-logind[1131]: New session 6 of user core.
Dec 13 03:51:03.945849 sshd[1263]: pam_unix(sshd:session): session closed for user core
Dec 13 03:51:03.952496 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 03:51:03.952862 systemd[1]: Stopped kubelet.service.
Dec 13 03:51:03.955782 systemd[1]: Starting kubelet.service...
Dec 13 03:51:03.959491 systemd[1]: Started sshd@6-172.24.4.174:22-172.24.4.1:37344.service.
Dec 13 03:51:03.961011 systemd[1]: sshd@5-172.24.4.174:22-172.24.4.1:37330.service: Deactivated successfully.
Dec 13 03:51:03.963922 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 03:51:03.970525 systemd-logind[1131]: Session 6 logged out. Waiting for processes to exit.
Dec 13 03:51:03.973405 systemd-logind[1131]: Removed session 6.
Dec 13 03:51:04.256590 systemd[1]: Started kubelet.service.
Dec 13 03:51:04.522497 kubelet[1276]: E1213 03:51:04.522293 1276 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 03:51:04.526337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 03:51:04.526633 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 03:51:05.322357 sshd[1270]: Accepted publickey for core from 172.24.4.1 port 37344 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:51:05.325515 sshd[1270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:51:05.334224 systemd-logind[1131]: New session 7 of user core.
Dec 13 03:51:05.336008 systemd[1]: Started session-7.scope.
Dec 13 03:51:05.910685 sudo[1283]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 03:51:05.911178 sudo[1283]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 03:51:05.959435 systemd[1]: Starting docker.service...
Dec 13 03:51:06.027597 env[1293]: time="2024-12-13T03:51:06.027528617Z" level=info msg="Starting up"
Dec 13 03:51:06.028724 env[1293]: time="2024-12-13T03:51:06.028686969Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 03:51:06.028724 env[1293]: time="2024-12-13T03:51:06.028708941Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 03:51:06.028890 env[1293]: time="2024-12-13T03:51:06.028730481Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Dec 13 03:51:06.028890 env[1293]: time="2024-12-13T03:51:06.028743866Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 03:51:06.030875 env[1293]: time="2024-12-13T03:51:06.030816503Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 03:51:06.030875 env[1293]: time="2024-12-13T03:51:06.030838384Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 03:51:06.030875 env[1293]: time="2024-12-13T03:51:06.030854705Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Dec 13 03:51:06.030875 env[1293]: time="2024-12-13T03:51:06.030864784Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 03:51:06.048828 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4166637632-merged.mount: Deactivated successfully.
Dec 13 03:51:06.083509 env[1293]: time="2024-12-13T03:51:06.083473742Z" level=info msg="Loading containers: start."
Dec 13 03:51:06.254156 kernel: Initializing XFRM netlink socket
Dec 13 03:51:06.332841 env[1293]: time="2024-12-13T03:51:06.331379489Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Dec 13 03:51:06.423907 systemd-networkd[976]: docker0: Link UP
Dec 13 03:51:06.437109 env[1293]: time="2024-12-13T03:51:06.437064341Z" level=info msg="Loading containers: done."
Dec 13 03:51:06.453383 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck391582298-merged.mount: Deactivated successfully.
Dec 13 03:51:06.469376 env[1293]: time="2024-12-13T03:51:06.469342509Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 03:51:06.469697 env[1293]: time="2024-12-13T03:51:06.469677236Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Dec 13 03:51:06.469872 env[1293]: time="2024-12-13T03:51:06.469855922Z" level=info msg="Daemon has completed initialization"
Dec 13 03:51:06.492889 systemd[1]: Started docker.service.
Dec 13 03:51:06.503773 env[1293]: time="2024-12-13T03:51:06.503720074Z" level=info msg="API listen on /run/docker.sock"
Dec 13 03:51:08.499958 env[1144]: time="2024-12-13T03:51:08.499866897Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\""
Dec 13 03:51:09.279055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1289559422.mount: Deactivated successfully.
Dec 13 03:51:12.226082 env[1144]: time="2024-12-13T03:51:12.226006729Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:12.229323 env[1144]: time="2024-12-13T03:51:12.229300855Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:12.233063 env[1144]: time="2024-12-13T03:51:12.232979795Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:12.235933 env[1144]: time="2024-12-13T03:51:12.235912434Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:12.237429 env[1144]: time="2024-12-13T03:51:12.237404032Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\""
Dec 13 03:51:12.249051 env[1144]: time="2024-12-13T03:51:12.249019521Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\""
Dec 13 03:51:14.777765 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 03:51:14.778012 systemd[1]: Stopped kubelet.service.
Dec 13 03:51:14.779516 systemd[1]: Starting kubelet.service...
Dec 13 03:51:14.905167 systemd[1]: Started kubelet.service.
Dec 13 03:51:15.443617 kubelet[1435]: E1213 03:51:15.443564 1435 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 03:51:15.446817 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 03:51:15.446944 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 03:51:16.135286 env[1144]: time="2024-12-13T03:51:16.135211975Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:16.139421 env[1144]: time="2024-12-13T03:51:16.139368077Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:16.143665 env[1144]: time="2024-12-13T03:51:16.143613487Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:16.150204 env[1144]: time="2024-12-13T03:51:16.150151456Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:16.153824 env[1144]: time="2024-12-13T03:51:16.152793866Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\""
Dec 13 03:51:16.176784 env[1144]: time="2024-12-13T03:51:16.176723136Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\""
Dec 13 03:51:18.176054 env[1144]: time="2024-12-13T03:51:18.175989828Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:18.179097 env[1144]: time="2024-12-13T03:51:18.179037500Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:18.183048 env[1144]: time="2024-12-13T03:51:18.183004803Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:18.184739 env[1144]: time="2024-12-13T03:51:18.184689604Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:18.186205 env[1144]: time="2024-12-13T03:51:18.186155971Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\""
Dec 13 03:51:18.197654 env[1144]: time="2024-12-13T03:51:18.197568645Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Dec 13 03:51:20.291794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1894263881.mount: Deactivated successfully.
Dec 13 03:51:21.810877 env[1144]: time="2024-12-13T03:51:21.810693760Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:21.814950 env[1144]: time="2024-12-13T03:51:21.814884643Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:21.818256 env[1144]: time="2024-12-13T03:51:21.818201664Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:21.821434 env[1144]: time="2024-12-13T03:51:21.821363742Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:21.823898 env[1144]: time="2024-12-13T03:51:21.823834634Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\""
Dec 13 03:51:21.854429 env[1144]: time="2024-12-13T03:51:21.854363823Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 03:51:22.470921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3901546535.mount: Deactivated successfully.
Dec 13 03:51:24.604859 env[1144]: time="2024-12-13T03:51:24.604750495Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:24.609788 env[1144]: time="2024-12-13T03:51:24.609709227Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:24.615140 env[1144]: time="2024-12-13T03:51:24.615031635Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:24.619771 env[1144]: time="2024-12-13T03:51:24.619703305Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:24.622387 env[1144]: time="2024-12-13T03:51:24.622288856Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 03:51:24.645645 env[1144]: time="2024-12-13T03:51:24.645523593Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 03:51:24.667919 update_engine[1132]: I1213 03:51:24.667834 1132 update_attempter.cc:509] Updating boot flags...
Dec 13 03:51:25.267360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2084699823.mount: Deactivated successfully.
Dec 13 03:51:25.286634 env[1144]: time="2024-12-13T03:51:25.286541072Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:25.294143 env[1144]: time="2024-12-13T03:51:25.294029144Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:25.299739 env[1144]: time="2024-12-13T03:51:25.299667043Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:25.303683 env[1144]: time="2024-12-13T03:51:25.303603372Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:25.305533 env[1144]: time="2024-12-13T03:51:25.305430790Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 03:51:25.329082 env[1144]: time="2024-12-13T03:51:25.329000090Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Dec 13 03:51:25.651324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 03:51:25.651914 systemd[1]: Stopped kubelet.service.
Dec 13 03:51:25.654966 systemd[1]: Starting kubelet.service...
Dec 13 03:51:25.776280 systemd[1]: Started kubelet.service.
Dec 13 03:51:25.844840 kubelet[1486]: E1213 03:51:25.844750 1486 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 03:51:25.846807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 03:51:25.847193 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 03:51:26.622599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2812892408.mount: Deactivated successfully.
Dec 13 03:51:30.565714 env[1144]: time="2024-12-13T03:51:30.565672171Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:30.569437 env[1144]: time="2024-12-13T03:51:30.569411726Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:30.575960 env[1144]: time="2024-12-13T03:51:30.575891313Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:30.582772 env[1144]: time="2024-12-13T03:51:30.582731059Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Dec 13 03:51:30.583393 env[1144]: time="2024-12-13T03:51:30.583370433Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:34.785443 systemd[1]: Stopped kubelet.service.
Dec 13 03:51:34.791154 systemd[1]: Starting kubelet.service...
Dec 13 03:51:34.840504 systemd[1]: Reloading.
Dec 13 03:51:34.931283 /usr/lib/systemd/system-generators/torcx-generator[1580]: time="2024-12-13T03:51:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 03:51:34.931317 /usr/lib/systemd/system-generators/torcx-generator[1580]: time="2024-12-13T03:51:34Z" level=info msg="torcx already run"
Dec 13 03:51:35.150437 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 03:51:35.150724 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 03:51:35.209800 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 03:51:35.358808 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 03:51:35.359324 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 03:51:35.359946 systemd[1]: Stopped kubelet.service.
Dec 13 03:51:35.363956 systemd[1]: Starting kubelet.service...
Dec 13 03:51:36.098482 systemd[1]: Started kubelet.service.
Dec 13 03:51:36.163382 kubelet[1631]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 03:51:36.163382 kubelet[1631]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 03:51:36.163382 kubelet[1631]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 03:51:36.291431 kubelet[1631]: I1213 03:51:36.291329 1631 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 03:51:36.818504 kubelet[1631]: I1213 03:51:36.818436 1631 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 03:51:36.818504 kubelet[1631]: I1213 03:51:36.818494 1631 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 03:51:36.818976 kubelet[1631]: I1213 03:51:36.818947 1631 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 03:51:36.866090 kubelet[1631]: I1213 03:51:36.866065 1631 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 03:51:36.998757 kubelet[1631]: E1213 03:51:36.998722 1631 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.174:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.174:6443: connect: connection refused
Dec 13 03:51:37.013377 kubelet[1631]: I1213 03:51:37.013338 1631 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 03:51:37.017659 kubelet[1631]: I1213 03:51:37.017602 1631 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 03:51:37.018215 kubelet[1631]: I1213 03:51:37.017815 1631 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-6-5-5611054123.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 03:51:37.018496 kubelet[1631]: I1213 03:51:37.018469 1631 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 03:51:37.018631 kubelet[1631]: I1213 03:51:37.018612 1631 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 03:51:37.018945 kubelet[1631]: I1213 03:51:37.018920 1631 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 03:51:37.021040 kubelet[1631]: I1213 03:51:37.021013 1631 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 03:51:37.021240 kubelet[1631]: I1213 03:51:37.021217 1631 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 03:51:37.021406 kubelet[1631]: I1213 03:51:37.021386 1631 kubelet.go:312] "Adding apiserver pod source"
Dec 13 03:51:37.021547 kubelet[1631]: I1213 03:51:37.021526 1631 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 03:51:37.051445 kubelet[1631]: W1213 03:51:37.051341 1631 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-5-5611054123.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused
Dec 13 03:51:37.051755 kubelet[1631]: E1213 03:51:37.051725 1631 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-5-5611054123.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused
Dec 13 03:51:37.055632 kubelet[1631]: I1213 03:51:37.055599 1631 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 03:51:37.060883 kubelet[1631]: I1213 03:51:37.060852 1631 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 03:51:37.061200 kubelet[1631]: W1213 03:51:37.061176 1631 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 03:51:37.062675 kubelet[1631]: I1213 03:51:37.062649 1631 server.go:1264] "Started kubelet"
Dec 13 03:51:37.063203 kubelet[1631]: W1213 03:51:37.063084 1631 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.174:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused
Dec 13 03:51:37.063496 kubelet[1631]: E1213 03:51:37.063430 1631 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.174:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused
Dec 13 03:51:37.064409 kubelet[1631]: I1213 03:51:37.064341 1631 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 03:51:37.066515 kubelet[1631]: I1213 03:51:37.066475 1631 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 03:51:37.076696 kubelet[1631]: I1213 03:51:37.073366 1631 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 03:51:37.077425 kubelet[1631]: I1213 03:51:37.077393 1631 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 03:51:37.077998 kubelet[1631]: E1213 03:51:37.077789 1631 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.174:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.174:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-6-5-5611054123.novalocal.1810a01ba0f4e5e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-5-5611054123.novalocal,UID:ci-3510-3-6-5-5611054123.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-5-5611054123.novalocal,},FirstTimestamp:2024-12-13 03:51:37.062606311 +0000 UTC m=+0.952781983,LastTimestamp:2024-12-13 03:51:37.062606311 +0000 UTC m=+0.952781983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-5-5611054123.novalocal,}"
Dec 13 03:51:37.085429 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 03:51:37.085700 kubelet[1631]: I1213 03:51:37.085650 1631 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 03:51:37.094811 kubelet[1631]: E1213 03:51:37.094769 1631 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 03:51:37.096374 kubelet[1631]: E1213 03:51:37.096310 1631 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510-3-6-5-5611054123.novalocal\" not found"
Dec 13 03:51:37.096626 kubelet[1631]: I1213 03:51:37.096600 1631 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 03:51:37.096944 kubelet[1631]: I1213 03:51:37.096914 1631 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 03:51:37.097220 kubelet[1631]: I1213 03:51:37.097195 1631 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 03:51:37.099747 kubelet[1631]: W1213 03:51:37.098903 1631 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused
Dec 13 03:51:37.099891 kubelet[1631]: E1213 03:51:37.099093 1631 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-5-5611054123.novalocal?timeout=10s\": dial tcp 172.24.4.174:6443: connect: connection refused" interval="200ms"
Dec 13 03:51:37.099891 kubelet[1631]: E1213 03:51:37.099833 1631 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused
Dec 13 03:51:37.099891 kubelet[1631]: I1213 03:51:37.099611 1631 factory.go:221] Registration of the systemd container factory successfully
Dec 13 03:51:37.100161 kubelet[1631]: I1213 03:51:37.100072 1631 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 03:51:37.102785 kubelet[1631]: I1213 03:51:37.102739 1631 factory.go:221] Registration of the containerd container factory successfully
Dec 13 03:51:37.145747 kubelet[1631]: I1213 03:51:37.145709 1631 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 03:51:37.145747 kubelet[1631]: I1213 03:51:37.145728 1631 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 03:51:37.145747 kubelet[1631]: I1213 03:51:37.145743 1631 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 03:51:37.146030 kubelet[1631]: I1213 03:51:37.145990 1631 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 03:51:37.148191 kubelet[1631]: I1213 03:51:37.148175 1631 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 03:51:37.148280 kubelet[1631]: I1213 03:51:37.148270 1631 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 03:51:37.148366 kubelet[1631]: I1213 03:51:37.148355 1631 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 03:51:37.148474 kubelet[1631]: E1213 03:51:37.148457 1631 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 03:51:37.150803 kubelet[1631]: I1213 03:51:37.150789 1631 policy_none.go:49] "None policy: Start"
Dec 13 03:51:37.151653 kubelet[1631]: I1213 03:51:37.151640 1631 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 03:51:37.151757 kubelet[1631]: I1213 03:51:37.151748 1631 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 03:51:37.157669 kubelet[1631]: W1213 03:51:37.157619 1631 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused
Dec 13 03:51:37.157669 kubelet[1631]: E1213 03:51:37.157673 1631 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused
Dec 13 03:51:37.160351 systemd[1]: Created slice kubepods.slice.
Dec 13 03:51:37.165206 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 03:51:37.168012 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 03:51:37.175054 kubelet[1631]: I1213 03:51:37.174937 1631 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 03:51:37.175357 kubelet[1631]: I1213 03:51:37.175280 1631 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 03:51:37.176500 kubelet[1631]: I1213 03:51:37.176414 1631 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 03:51:37.176980 kubelet[1631]: E1213 03:51:37.176966 1631 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-6-5-5611054123.novalocal\" not found"
Dec 13 03:51:37.198952 kubelet[1631]: I1213 03:51:37.198933 1631 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:37.199364 kubelet[1631]: E1213 03:51:37.199341 1631 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.174:6443/api/v1/nodes\": dial tcp 172.24.4.174:6443: connect: connection refused" node="ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:37.248998 kubelet[1631]: I1213 03:51:37.248941 1631 topology_manager.go:215] "Topology Admit Handler" podUID="a9e0c6b1a9e9a5a7bf5dc2aa132c70d9" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:37.252497 kubelet[1631]: I1213 03:51:37.252451 1631 topology_manager.go:215] "Topology Admit Handler" podUID="2d95e236582cc2c90bec29d167685c23" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:37.256608 kubelet[1631]: I1213 03:51:37.255794 1631 topology_manager.go:215] "Topology Admit Handler" podUID="8623df5c2c435812f100e1606646e241" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:37.273162 systemd[1]: Created slice kubepods-burstable-pod2d95e236582cc2c90bec29d167685c23.slice.
Dec 13 03:51:37.291999 systemd[1]: Created slice kubepods-burstable-poda9e0c6b1a9e9a5a7bf5dc2aa132c70d9.slice.
Dec 13 03:51:37.301270 kubelet[1631]: I1213 03:51:37.301216 1631 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2d95e236582cc2c90bec29d167685c23-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal\" (UID: \"2d95e236582cc2c90bec29d167685c23\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:37.301600 kubelet[1631]: I1213 03:51:37.301561 1631 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2d95e236582cc2c90bec29d167685c23-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal\" (UID: \"2d95e236582cc2c90bec29d167685c23\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:37.301824 kubelet[1631]: I1213 03:51:37.301784 1631 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d95e236582cc2c90bec29d167685c23-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal\" (UID: \"2d95e236582cc2c90bec29d167685c23\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:37.302008 kubelet[1631]: I1213 03:51:37.301976 1631 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9e0c6b1a9e9a5a7bf5dc2aa132c70d9-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-5-5611054123.novalocal\" (UID: \"a9e0c6b1a9e9a5a7bf5dc2aa132c70d9\") " pod="kube-system/kube-apiserver-ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:37.302293 kubelet[1631]: I1213 03:51:37.302235 1631 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9e0c6b1a9e9a5a7bf5dc2aa132c70d9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-5-5611054123.novalocal\" (UID: \"a9e0c6b1a9e9a5a7bf5dc2aa132c70d9\") " pod="kube-system/kube-apiserver-ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:37.302302 systemd[1]: Created slice kubepods-burstable-pod8623df5c2c435812f100e1606646e241.slice.
Dec 13 03:51:37.303461 kubelet[1631]: I1213 03:51:37.303425 1631 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d95e236582cc2c90bec29d167685c23-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal\" (UID: \"2d95e236582cc2c90bec29d167685c23\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:37.304553 kubelet[1631]: I1213 03:51:37.304318 1631 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8623df5c2c435812f100e1606646e241-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-5-5611054123.novalocal\" (UID: \"8623df5c2c435812f100e1606646e241\") " pod="kube-system/kube-scheduler-ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:37.304698 kubelet[1631]: I1213 03:51:37.304630 1631 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9e0c6b1a9e9a5a7bf5dc2aa132c70d9-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-5-5611054123.novalocal\" (UID: \"a9e0c6b1a9e9a5a7bf5dc2aa132c70d9\") " pod="kube-system/kube-apiserver-ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:37.304773 kubelet[1631]: I1213 03:51:37.304733 1631 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2d95e236582cc2c90bec29d167685c23-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal\" (UID: \"2d95e236582cc2c90bec29d167685c23\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:37.307859 kubelet[1631]: E1213 03:51:37.307800 1631 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-5-5611054123.novalocal?timeout=10s\": dial tcp 172.24.4.174:6443: connect: connection refused" interval="400ms"
Dec 13 03:51:37.405509 kubelet[1631]: I1213 03:51:37.405465 1631 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:37.406336 kubelet[1631]: E1213 03:51:37.406287 1631 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.174:6443/api/v1/nodes\": dial tcp 172.24.4.174:6443: connect: connection refused" node="ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:37.585293 env[1144]: time="2024-12-13T03:51:37.585141861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal,Uid:2d95e236582cc2c90bec29d167685c23,Namespace:kube-system,Attempt:0,}"
Dec 13 03:51:37.603278 env[1144]: time="2024-12-13T03:51:37.603160612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-5-5611054123.novalocal,Uid:a9e0c6b1a9e9a5a7bf5dc2aa132c70d9,Namespace:kube-system,Attempt:0,}"
Dec 13 03:51:37.610812 env[1144]: time="2024-12-13T03:51:37.610315909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-5-5611054123.novalocal,Uid:8623df5c2c435812f100e1606646e241,Namespace:kube-system,Attempt:0,}"
Dec 13 03:51:37.709636 kubelet[1631]: E1213 03:51:37.708801 1631 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-5-5611054123.novalocal?timeout=10s\": dial tcp 172.24.4.174:6443: connect: connection refused" interval="800ms"
Dec 13 03:51:37.811142 kubelet[1631]: I1213 03:51:37.810649 1631 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:37.811828 kubelet[1631]: E1213 03:51:37.811767 1631 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.174:6443/api/v1/nodes\": dial tcp 172.24.4.174:6443: connect: connection refused" node="ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:38.191480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2848536068.mount: Deactivated successfully.
Dec 13 03:51:38.200327 env[1144]: time="2024-12-13T03:51:38.200209271Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:38.205515 env[1144]: time="2024-12-13T03:51:38.205451360Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:38.207658 env[1144]: time="2024-12-13T03:51:38.207606773Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:38.218883 env[1144]: time="2024-12-13T03:51:38.218821884Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:38.222372 kubelet[1631]: W1213 03:51:38.222256 1631 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.174:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused
Dec 13 03:51:38.222952 kubelet[1631]: E1213 03:51:38.222387 1631 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.174:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused
Dec 13 03:51:38.223606 env[1144]: time="2024-12-13T03:51:38.223549684Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:38.230743 env[1144]: time="2024-12-13T03:51:38.230685984Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:38.240817 env[1144]: time="2024-12-13T03:51:38.240764237Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:38.249086 env[1144]: time="2024-12-13T03:51:38.249035451Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:38.251257 env[1144]: time="2024-12-13T03:51:38.251207917Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:38.254446 env[1144]: time="2024-12-13T03:51:38.254392245Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:38.257275 env[1144]: time="2024-12-13T03:51:38.257225444Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:38.259264 env[1144]: time="2024-12-13T03:51:38.259213212Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:51:38.306036 env[1144]: time="2024-12-13T03:51:38.305890200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:51:38.306455 env[1144]: time="2024-12-13T03:51:38.305981982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:51:38.306716 env[1144]: time="2024-12-13T03:51:38.306419154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:51:38.317641 env[1144]: time="2024-12-13T03:51:38.317310195Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b34365b5f3852d00c394cb31412120ea491b808f28a445c6f4790b6101ca377 pid=1670 runtime=io.containerd.runc.v2
Dec 13 03:51:38.347715 systemd[1]: Started cri-containerd-2b34365b5f3852d00c394cb31412120ea491b808f28a445c6f4790b6101ca377.scope.
Dec 13 03:51:38.352903 kubelet[1631]: W1213 03:51:38.352837 1631 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused
Dec 13 03:51:38.352903 kubelet[1631]: E1213 03:51:38.352899 1631 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused
Dec 13 03:51:38.362700 env[1144]: time="2024-12-13T03:51:38.362187329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:51:38.362700 env[1144]: time="2024-12-13T03:51:38.362234959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:51:38.362700 env[1144]: time="2024-12-13T03:51:38.362249937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:51:38.362700 env[1144]: time="2024-12-13T03:51:38.362500238Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d06bec5c10806b2a3de0c50276380bbdb8c5e55ddb2a182e2d849fb19fb20d9 pid=1694 runtime=io.containerd.runc.v2
Dec 13 03:51:38.367069 env[1144]: time="2024-12-13T03:51:38.364769895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:51:38.367069 env[1144]: time="2024-12-13T03:51:38.364815582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:51:38.367069 env[1144]: time="2024-12-13T03:51:38.364830099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:51:38.367069 env[1144]: time="2024-12-13T03:51:38.364950966Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c13a25b54427f1b16f2aa679baea84aa6287b34abd29b11517eee1a656cc69a8 pid=1701 runtime=io.containerd.runc.v2
Dec 13 03:51:38.391936 systemd[1]: Started cri-containerd-c13a25b54427f1b16f2aa679baea84aa6287b34abd29b11517eee1a656cc69a8.scope.
Dec 13 03:51:38.409494 systemd[1]: Started cri-containerd-8d06bec5c10806b2a3de0c50276380bbdb8c5e55ddb2a182e2d849fb19fb20d9.scope.
Dec 13 03:51:38.443438 env[1144]: time="2024-12-13T03:51:38.443289755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal,Uid:2d95e236582cc2c90bec29d167685c23,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b34365b5f3852d00c394cb31412120ea491b808f28a445c6f4790b6101ca377\""
Dec 13 03:51:38.449893 env[1144]: time="2024-12-13T03:51:38.449857597Z" level=info msg="CreateContainer within sandbox \"2b34365b5f3852d00c394cb31412120ea491b808f28a445c6f4790b6101ca377\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 03:51:38.480822 env[1144]: time="2024-12-13T03:51:38.480332066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-5-5611054123.novalocal,Uid:8623df5c2c435812f100e1606646e241,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d06bec5c10806b2a3de0c50276380bbdb8c5e55ddb2a182e2d849fb19fb20d9\""
Dec 13 03:51:38.483375 env[1144]: time="2024-12-13T03:51:38.483328741Z" level=info msg="CreateContainer within sandbox \"8d06bec5c10806b2a3de0c50276380bbdb8c5e55ddb2a182e2d849fb19fb20d9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 03:51:38.484513 env[1144]: time="2024-12-13T03:51:38.484476189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-5-5611054123.novalocal,Uid:a9e0c6b1a9e9a5a7bf5dc2aa132c70d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c13a25b54427f1b16f2aa679baea84aa6287b34abd29b11517eee1a656cc69a8\""
Dec 13 03:51:38.487712 env[1144]: time="2024-12-13T03:51:38.487667671Z" level=info msg="CreateContainer within sandbox \"c13a25b54427f1b16f2aa679baea84aa6287b34abd29b11517eee1a656cc69a8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 03:51:38.509317 kubelet[1631]: E1213 03:51:38.509264 1631 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-5-5611054123.novalocal?timeout=10s\": dial tcp 172.24.4.174:6443: connect: connection refused" interval="1.6s"
Dec 13 03:51:38.528157 kubelet[1631]: W1213 03:51:38.528057 1631 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused
Dec 13 03:51:38.528157 kubelet[1631]: E1213 03:51:38.528159 1631 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused
Dec 13 03:51:38.574631 kubelet[1631]: W1213 03:51:38.574516 1631 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-5-5611054123.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused
Dec 13 03:51:38.574787 kubelet[1631]: E1213 03:51:38.574667 1631 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-5-5611054123.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused
Dec 13 03:51:38.618074 kubelet[1631]: I1213 03:51:38.617449 1631 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:38.618074 kubelet[1631]: E1213 03:51:38.618015 1631 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.174:6443/api/v1/nodes\": dial tcp 172.24.4.174:6443: connect: connection refused" node="ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:38.821408 env[1144]: time="2024-12-13T03:51:38.821090678Z" level=info msg="CreateContainer within sandbox \"8d06bec5c10806b2a3de0c50276380bbdb8c5e55ddb2a182e2d849fb19fb20d9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ce42b5a85769b7639414acc5370561b1de379f0f4f7bdbbebee8c83affc6aa69\""
Dec 13 03:51:38.824436 env[1144]: time="2024-12-13T03:51:38.824360417Z" level=info msg="CreateContainer within sandbox \"2b34365b5f3852d00c394cb31412120ea491b808f28a445c6f4790b6101ca377\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"22d812febc9eba8897eea68708184c5baf26eca912abe97adde39fe27207201d\""
Dec 13 03:51:38.825212 env[1144]: time="2024-12-13T03:51:38.825161503Z" level=info msg="StartContainer for \"ce42b5a85769b7639414acc5370561b1de379f0f4f7bdbbebee8c83affc6aa69\""
Dec 13 03:51:38.828493 env[1144]: time="2024-12-13T03:51:38.828407348Z" level=info msg="CreateContainer within sandbox \"c13a25b54427f1b16f2aa679baea84aa6287b34abd29b11517eee1a656cc69a8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"44493261abfa7fc5152f8daceab98237712b4629e75142bef6b38bdd46ddb3d0\""
Dec 13 03:51:38.829758 env[1144]: time="2024-12-13T03:51:38.829688627Z" level=info msg="StartContainer for \"22d812febc9eba8897eea68708184c5baf26eca912abe97adde39fe27207201d\""
Dec 13 03:51:38.841363 env[1144]: time="2024-12-13T03:51:38.841291858Z" level=info msg="StartContainer for \"44493261abfa7fc5152f8daceab98237712b4629e75142bef6b38bdd46ddb3d0\""
Dec 13 03:51:38.871220 systemd[1]: Started cri-containerd-ce42b5a85769b7639414acc5370561b1de379f0f4f7bdbbebee8c83affc6aa69.scope.
Dec 13 03:51:38.902314 systemd[1]: Started cri-containerd-44493261abfa7fc5152f8daceab98237712b4629e75142bef6b38bdd46ddb3d0.scope.
Dec 13 03:51:38.905388 systemd[1]: Started cri-containerd-22d812febc9eba8897eea68708184c5baf26eca912abe97adde39fe27207201d.scope.
Dec 13 03:51:39.016587 kubelet[1631]: E1213 03:51:39.016523 1631 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.174:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.174:6443: connect: connection refused
Dec 13 03:51:39.046546 env[1144]: time="2024-12-13T03:51:39.046487523Z" level=info msg="StartContainer for \"ce42b5a85769b7639414acc5370561b1de379f0f4f7bdbbebee8c83affc6aa69\" returns successfully"
Dec 13 03:51:39.047183 env[1144]: time="2024-12-13T03:51:39.047161931Z" level=info msg="StartContainer for \"44493261abfa7fc5152f8daceab98237712b4629e75142bef6b38bdd46ddb3d0\" returns successfully"
Dec 13 03:51:39.050186 env[1144]: time="2024-12-13T03:51:39.050157925Z" level=info msg="StartContainer for \"22d812febc9eba8897eea68708184c5baf26eca912abe97adde39fe27207201d\" returns successfully"
Dec 13 03:51:40.220929 kubelet[1631]: I1213 03:51:40.220901 1631 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:41.765348 kubelet[1631]: E1213 03:51:41.765318 1631 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-6-5-5611054123.novalocal\" not found" node="ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:41.913738 kubelet[1631]: I1213 03:51:41.913682 1631 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-6-5-5611054123.novalocal"
Dec 13 03:51:42.052974 kubelet[1631]: I1213 03:51:42.052868 1631 apiserver.go:52] "Watching apiserver"
Dec 13 03:51:42.097550 kubelet[1631]: I1213 03:51:42.097451 1631 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 03:51:44.304206 systemd[1]: Reloading.
Dec 13 03:51:44.446223 kubelet[1631]: W1213 03:51:44.446201 1631 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 03:51:44.456077 /usr/lib/systemd/system-generators/torcx-generator[1921]: time="2024-12-13T03:51:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 03:51:44.456416 /usr/lib/systemd/system-generators/torcx-generator[1921]: time="2024-12-13T03:51:44Z" level=info msg="torcx already run"
Dec 13 03:51:44.546768 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 03:51:44.546788 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 03:51:44.570541 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 03:51:44.688620 systemd[1]: Stopping kubelet.service...
Dec 13 03:51:44.707699 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 03:51:44.707868 systemd[1]: Stopped kubelet.service.
Dec 13 03:51:44.707908 systemd[1]: kubelet.service: Consumed 1.433s CPU time.
Dec 13 03:51:44.710443 systemd[1]: Starting kubelet.service...
Dec 13 03:51:46.794893 systemd[1]: Started kubelet.service.
Dec 13 03:51:46.916900 kubelet[1973]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 03:51:46.916900 kubelet[1973]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 03:51:46.916900 kubelet[1973]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 03:51:46.916900 kubelet[1973]: I1213 03:51:46.916035 1973 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 03:51:46.922805 kubelet[1973]: I1213 03:51:46.922741 1973 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 03:51:46.922805 kubelet[1973]: I1213 03:51:46.922770 1973 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 03:51:46.923231 kubelet[1973]: I1213 03:51:46.923039 1973 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 03:51:46.928465 kubelet[1973]: I1213 03:51:46.926126 1973 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 03:51:46.940774 kubelet[1973]: I1213 03:51:46.940692 1973 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 03:51:46.959487 sudo[1987]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 03:51:46.959754 sudo[1987]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 13 03:51:46.962971 kubelet[1973]: I1213 03:51:46.962897 1973 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 03:51:46.963209 kubelet[1973]: I1213 03:51:46.963167 1973 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 03:51:46.963413 kubelet[1973]: I1213 03:51:46.963205 1973 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-6-5-5611054123.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 03:51:46.967404 kubelet[1973]: I1213 03:51:46.967374 1973 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 03:51:46.967404 kubelet[1973]: I1213 03:51:46.967403 1973 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 03:51:46.967508 kubelet[1973]: I1213 03:51:46.967445 1973 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 03:51:46.967577 kubelet[1973]: I1213 03:51:46.967548 1973 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 03:51:46.967577 kubelet[1973]: I1213 03:51:46.967570 1973 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 03:51:46.968804 kubelet[1973]: I1213 03:51:46.968178 1973 kubelet.go:312] "Adding apiserver pod source"
Dec 13 03:51:46.968910 kubelet[1973]: I1213 03:51:46.968897 1973 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 03:51:46.975383 kubelet[1973]: I1213 03:51:46.975363 1973 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 03:51:46.975660 kubelet[1973]: I1213 03:51:46.975646 1973 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 03:51:46.976193 kubelet[1973]: I1213 03:51:46.976180 1973 server.go:1264] "Started kubelet"
Dec 13 03:51:46.979069 kubelet[1973]: I1213 03:51:46.979053 1973 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 03:51:46.991931 kubelet[1973]: E1213 03:51:46.991906 1973 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 03:51:46.993713 kubelet[1973]: I1213 03:51:46.993686 1973 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 03:51:46.994772 kubelet[1973]: I1213 03:51:46.994756 1973 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 03:51:46.995624 kubelet[1973]: I1213 03:51:46.995611 1973 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 03:51:46.996202 kubelet[1973]: I1213 03:51:46.996186 1973 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 03:51:46.996474 kubelet[1973]: I1213 03:51:46.996461 1973 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 03:51:46.999707 kubelet[1973]: I1213 03:51:46.999562 1973 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 03:51:47.000052 kubelet[1973]: I1213 03:51:47.000035 1973 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 03:51:47.010768 kubelet[1973]: I1213 03:51:47.010745 1973 factory.go:221] Registration of the systemd container factory successfully
Dec 13 03:51:47.011573 kubelet[1973]: I1213 03:51:47.011553 1973 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 03:51:47.014866 kubelet[1973]: I1213 03:51:47.014850 1973 factory.go:221] Registration of the containerd container factory successfully
Dec 13 03:51:47.033736 kubelet[1973]: I1213 03:51:47.033647 1973 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 03:51:47.035003 kubelet[1973]: I1213 03:51:47.034984 1973 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Dec 13 03:51:47.035211 kubelet[1973]: I1213 03:51:47.035199 1973 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 03:51:47.035308 kubelet[1973]: I1213 03:51:47.035297 1973 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 03:51:47.035417 kubelet[1973]: E1213 03:51:47.035399 1973 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 03:51:47.082990 kubelet[1973]: I1213 03:51:47.082905 1973 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 03:51:47.082990 kubelet[1973]: I1213 03:51:47.082924 1973 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 03:51:47.082990 kubelet[1973]: I1213 03:51:47.082944 1973 state_mem.go:36] "Initialized new in-memory state store" Dec 13 03:51:47.102122 kubelet[1973]: I1213 03:51:47.102075 1973 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-5-5611054123.novalocal" Dec 13 03:51:47.125871 kubelet[1973]: I1213 03:51:47.125786 1973 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 03:51:47.125871 kubelet[1973]: I1213 03:51:47.125829 1973 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 03:51:47.125871 kubelet[1973]: I1213 03:51:47.125872 1973 policy_none.go:49] "None policy: Start" Dec 13 03:51:47.136219 kubelet[1973]: E1213 03:51:47.136178 1973 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 03:51:47.141937 kubelet[1973]: I1213 03:51:47.138360 1973 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 03:51:47.141937 kubelet[1973]: I1213 03:51:47.138400 1973 state_mem.go:35] "Initializing new in-memory state store" Dec 13 03:51:47.141937 kubelet[1973]: I1213 03:51:47.138571 1973 state_mem.go:75] "Updated machine memory state" Dec 13 03:51:47.148410 kubelet[1973]: I1213 03:51:47.148368 1973 manager.go:479] 
"Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 03:51:47.153336 kubelet[1973]: I1213 03:51:47.153216 1973 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 03:51:47.155259 kubelet[1973]: I1213 03:51:47.155204 1973 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 03:51:47.193043 kubelet[1973]: I1213 03:51:47.192451 1973 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510-3-6-5-5611054123.novalocal" Dec 13 03:51:47.193043 kubelet[1973]: I1213 03:51:47.192623 1973 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-6-5-5611054123.novalocal" Dec 13 03:51:47.336714 kubelet[1973]: I1213 03:51:47.336522 1973 topology_manager.go:215] "Topology Admit Handler" podUID="a9e0c6b1a9e9a5a7bf5dc2aa132c70d9" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-6-5-5611054123.novalocal" Dec 13 03:51:47.337199 kubelet[1973]: I1213 03:51:47.337159 1973 topology_manager.go:215] "Topology Admit Handler" podUID="2d95e236582cc2c90bec29d167685c23" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal" Dec 13 03:51:47.337558 kubelet[1973]: I1213 03:51:47.337517 1973 topology_manager.go:215] "Topology Admit Handler" podUID="8623df5c2c435812f100e1606646e241" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-6-5-5611054123.novalocal" Dec 13 03:51:47.402732 kubelet[1973]: I1213 03:51:47.402663 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2d95e236582cc2c90bec29d167685c23-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal\" (UID: \"2d95e236582cc2c90bec29d167685c23\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal" Dec 13 03:51:47.403224 kubelet[1973]: I1213 03:51:47.403177 1973 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8623df5c2c435812f100e1606646e241-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-5-5611054123.novalocal\" (UID: \"8623df5c2c435812f100e1606646e241\") " pod="kube-system/kube-scheduler-ci-3510-3-6-5-5611054123.novalocal" Dec 13 03:51:47.403487 kubelet[1973]: I1213 03:51:47.403449 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9e0c6b1a9e9a5a7bf5dc2aa132c70d9-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-5-5611054123.novalocal\" (UID: \"a9e0c6b1a9e9a5a7bf5dc2aa132c70d9\") " pod="kube-system/kube-apiserver-ci-3510-3-6-5-5611054123.novalocal" Dec 13 03:51:47.403737 kubelet[1973]: I1213 03:51:47.403687 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9e0c6b1a9e9a5a7bf5dc2aa132c70d9-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-5-5611054123.novalocal\" (UID: \"a9e0c6b1a9e9a5a7bf5dc2aa132c70d9\") " pod="kube-system/kube-apiserver-ci-3510-3-6-5-5611054123.novalocal" Dec 13 03:51:47.403991 kubelet[1973]: I1213 03:51:47.403948 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2d95e236582cc2c90bec29d167685c23-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal\" (UID: \"2d95e236582cc2c90bec29d167685c23\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal" Dec 13 03:51:47.404385 kubelet[1973]: I1213 03:51:47.404337 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d95e236582cc2c90bec29d167685c23-k8s-certs\") pod 
\"kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal\" (UID: \"2d95e236582cc2c90bec29d167685c23\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal" Dec 13 03:51:47.404673 kubelet[1973]: I1213 03:51:47.404625 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9e0c6b1a9e9a5a7bf5dc2aa132c70d9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-5-5611054123.novalocal\" (UID: \"a9e0c6b1a9e9a5a7bf5dc2aa132c70d9\") " pod="kube-system/kube-apiserver-ci-3510-3-6-5-5611054123.novalocal" Dec 13 03:51:47.404929 kubelet[1973]: I1213 03:51:47.404891 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2d95e236582cc2c90bec29d167685c23-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal\" (UID: \"2d95e236582cc2c90bec29d167685c23\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal" Dec 13 03:51:47.405297 kubelet[1973]: I1213 03:51:47.405250 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d95e236582cc2c90bec29d167685c23-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal\" (UID: \"2d95e236582cc2c90bec29d167685c23\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal" Dec 13 03:51:47.638671 kubelet[1973]: W1213 03:51:47.638566 1973 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 03:51:47.638915 kubelet[1973]: E1213 03:51:47.638889 1973 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-6-5-5611054123.novalocal\" 
already exists" pod="kube-system/kube-apiserver-ci-3510-3-6-5-5611054123.novalocal" Dec 13 03:51:47.641501 kubelet[1973]: W1213 03:51:47.641467 1973 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 03:51:47.641692 kubelet[1973]: W1213 03:51:47.641646 1973 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 03:51:47.974967 kubelet[1973]: I1213 03:51:47.974922 1973 apiserver.go:52] "Watching apiserver" Dec 13 03:51:47.997907 kubelet[1973]: I1213 03:51:47.997860 1973 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 03:51:48.082582 kubelet[1973]: W1213 03:51:48.082542 1973 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 03:51:48.083180 kubelet[1973]: E1213 03:51:48.083098 1973 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-6-5-5611054123.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-6-5-5611054123.novalocal" Dec 13 03:51:48.142745 kubelet[1973]: I1213 03:51:48.142618 1973 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-6-5-5611054123.novalocal" podStartSLOduration=4.142536463 podStartE2EDuration="4.142536463s" podCreationTimestamp="2024-12-13 03:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:51:48.129632798 +0000 UTC m=+1.305801069" watchObservedRunningTime="2024-12-13 03:51:48.142536463 +0000 UTC m=+1.318704754" Dec 13 03:51:48.143480 kubelet[1973]: I1213 03:51:48.143377 1973 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-scheduler-ci-3510-3-6-5-5611054123.novalocal" podStartSLOduration=1.14336045 podStartE2EDuration="1.14336045s" podCreationTimestamp="2024-12-13 03:51:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:51:48.143285169 +0000 UTC m=+1.319453400" watchObservedRunningTime="2024-12-13 03:51:48.14336045 +0000 UTC m=+1.319528721" Dec 13 03:51:48.156180 kubelet[1973]: I1213 03:51:48.156051 1973 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-6-5-5611054123.novalocal" podStartSLOduration=1.1560291839999999 podStartE2EDuration="1.156029184s" podCreationTimestamp="2024-12-13 03:51:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:51:48.155948502 +0000 UTC m=+1.332116753" watchObservedRunningTime="2024-12-13 03:51:48.156029184 +0000 UTC m=+1.332197455" Dec 13 03:51:48.438346 sudo[1987]: pam_unix(sudo:session): session closed for user root Dec 13 03:51:51.183828 sudo[1283]: pam_unix(sudo:session): session closed for user root Dec 13 03:51:51.447542 sshd[1270]: pam_unix(sshd:session): session closed for user core Dec 13 03:51:51.454465 systemd[1]: sshd@6-172.24.4.174:22-172.24.4.1:37344.service: Deactivated successfully. Dec 13 03:51:51.456093 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 03:51:51.456484 systemd[1]: session-7.scope: Consumed 7.595s CPU time. Dec 13 03:51:51.457816 systemd-logind[1131]: Session 7 logged out. Waiting for processes to exit. Dec 13 03:51:51.460512 systemd-logind[1131]: Removed session 7. 
Dec 13 03:51:59.304509 kubelet[1973]: I1213 03:51:59.304484 1973 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 03:51:59.305393 env[1144]: time="2024-12-13T03:51:59.305342612Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 03:51:59.306351 kubelet[1973]: I1213 03:51:59.306301 1973 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 03:51:59.976386 kubelet[1973]: I1213 03:51:59.976328 1973 topology_manager.go:215] "Topology Admit Handler" podUID="dfca7331-a462-4415-99a4-3b99a346d4a6" podNamespace="kube-system" podName="kube-proxy-mxt9g"
Dec 13 03:51:59.978629 kubelet[1973]: I1213 03:51:59.978589 1973 topology_manager.go:215] "Topology Admit Handler" podUID="3d4a4f25-a9d3-47b4-8463-be8ab58137b7" podNamespace="kube-system" podName="cilium-bxfm7"
Dec 13 03:51:59.984403 kubelet[1973]: I1213 03:51:59.984368 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-clustermesh-secrets\") pod \"cilium-bxfm7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") " pod="kube-system/cilium-bxfm7"
Dec 13 03:51:59.984542 kubelet[1973]: I1213 03:51:59.984406 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-host-proc-sys-kernel\") pod \"cilium-bxfm7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") " pod="kube-system/cilium-bxfm7"
Dec 13 03:51:59.984542 kubelet[1973]: I1213 03:51:59.984445 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-xtables-lock\") pod \"cilium-bxfm7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") " pod="kube-system/cilium-bxfm7"
Dec 13 03:51:59.984542 kubelet[1973]: I1213 03:51:59.984466 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-host-proc-sys-net\") pod \"cilium-bxfm7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") " pod="kube-system/cilium-bxfm7"
Dec 13 03:51:59.984542 kubelet[1973]: I1213 03:51:59.984486 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dfca7331-a462-4415-99a4-3b99a346d4a6-kube-proxy\") pod \"kube-proxy-mxt9g\" (UID: \"dfca7331-a462-4415-99a4-3b99a346d4a6\") " pod="kube-system/kube-proxy-mxt9g"
Dec 13 03:51:59.984542 kubelet[1973]: I1213 03:51:59.984506 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfca7331-a462-4415-99a4-3b99a346d4a6-xtables-lock\") pod \"kube-proxy-mxt9g\" (UID: \"dfca7331-a462-4415-99a4-3b99a346d4a6\") " pod="kube-system/kube-proxy-mxt9g"
Dec 13 03:51:59.984690 kubelet[1973]: I1213 03:51:59.984547 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-hostproc\") pod \"cilium-bxfm7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") " pod="kube-system/cilium-bxfm7"
Dec 13 03:51:59.984690 kubelet[1973]: I1213 03:51:59.984567 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-hubble-tls\") pod \"cilium-bxfm7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") " pod="kube-system/cilium-bxfm7"
Dec 13 03:51:59.984690 kubelet[1973]: I1213 03:51:59.984585 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-bpf-maps\") pod \"cilium-bxfm7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") " pod="kube-system/cilium-bxfm7"
Dec 13 03:51:59.984690 kubelet[1973]: I1213 03:51:59.984604 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-cilium-config-path\") pod \"cilium-bxfm7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") " pod="kube-system/cilium-bxfm7"
Dec 13 03:51:59.984690 kubelet[1973]: I1213 03:51:59.984623 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdfhv\" (UniqueName: \"kubernetes.io/projected/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-kube-api-access-kdfhv\") pod \"cilium-bxfm7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") " pod="kube-system/cilium-bxfm7"
Dec 13 03:51:59.984690 kubelet[1973]: I1213 03:51:59.984645 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfca7331-a462-4415-99a4-3b99a346d4a6-lib-modules\") pod \"kube-proxy-mxt9g\" (UID: \"dfca7331-a462-4415-99a4-3b99a346d4a6\") " pod="kube-system/kube-proxy-mxt9g"
Dec 13 03:51:59.984851 kubelet[1973]: I1213 03:51:59.984664 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9hxw\" (UniqueName: \"kubernetes.io/projected/dfca7331-a462-4415-99a4-3b99a346d4a6-kube-api-access-m9hxw\") pod \"kube-proxy-mxt9g\" (UID: \"dfca7331-a462-4415-99a4-3b99a346d4a6\") " pod="kube-system/kube-proxy-mxt9g"
Dec 13 03:51:59.984851 kubelet[1973]: I1213 03:51:59.984681 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-cilium-cgroup\") pod \"cilium-bxfm7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") " pod="kube-system/cilium-bxfm7"
Dec 13 03:51:59.984851 kubelet[1973]: I1213 03:51:59.984698 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-cni-path\") pod \"cilium-bxfm7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") " pod="kube-system/cilium-bxfm7"
Dec 13 03:51:59.984851 kubelet[1973]: I1213 03:51:59.984716 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-cilium-run\") pod \"cilium-bxfm7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") " pod="kube-system/cilium-bxfm7"
Dec 13 03:51:59.984851 kubelet[1973]: I1213 03:51:59.984734 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-etc-cni-netd\") pod \"cilium-bxfm7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") " pod="kube-system/cilium-bxfm7"
Dec 13 03:51:59.984851 kubelet[1973]: I1213 03:51:59.984753 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-lib-modules\") pod \"cilium-bxfm7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") " pod="kube-system/cilium-bxfm7"
Dec 13 03:51:59.988166 systemd[1]: Created slice kubepods-burstable-pod3d4a4f25_a9d3_47b4_8463_be8ab58137b7.slice.
Dec 13 03:51:59.993828 systemd[1]: Created slice kubepods-besteffort-poddfca7331_a462_4415_99a4_3b99a346d4a6.slice.
Dec 13 03:52:00.295555 env[1144]: time="2024-12-13T03:52:00.293061500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bxfm7,Uid:3d4a4f25-a9d3-47b4-8463-be8ab58137b7,Namespace:kube-system,Attempt:0,}"
Dec 13 03:52:00.303056 env[1144]: time="2024-12-13T03:52:00.302756706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mxt9g,Uid:dfca7331-a462-4415-99a4-3b99a346d4a6,Namespace:kube-system,Attempt:0,}"
Dec 13 03:52:00.335755 env[1144]: time="2024-12-13T03:52:00.335613609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:52:00.336079 env[1144]: time="2024-12-13T03:52:00.335796251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:52:00.336079 env[1144]: time="2024-12-13T03:52:00.335874819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:52:00.336465 env[1144]: time="2024-12-13T03:52:00.336423398Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0 pid=2063 runtime=io.containerd.runc.v2
Dec 13 03:52:00.336800 env[1144]: time="2024-12-13T03:52:00.336751764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:52:00.336918 env[1144]: time="2024-12-13T03:52:00.336895875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:52:00.337004 env[1144]: time="2024-12-13T03:52:00.336983339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:52:00.337411 env[1144]: time="2024-12-13T03:52:00.337369574Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/281eddd8ab192ea376739b585881d14cc29fc4d71c82723be72df7ecca1c79f2 pid=2065 runtime=io.containerd.runc.v2
Dec 13 03:52:00.366005 systemd[1]: Started cri-containerd-68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0.scope.
Dec 13 03:52:00.385483 systemd[1]: Started cri-containerd-281eddd8ab192ea376739b585881d14cc29fc4d71c82723be72df7ecca1c79f2.scope.
Dec 13 03:52:00.452580 env[1144]: time="2024-12-13T03:52:00.452481457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bxfm7,Uid:3d4a4f25-a9d3-47b4-8463-be8ab58137b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\""
Dec 13 03:52:00.465703 env[1144]: time="2024-12-13T03:52:00.465668125Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 03:52:00.466000 kubelet[1973]: I1213 03:52:00.465966 1973 topology_manager.go:215] "Topology Admit Handler" podUID="d50d7f2e-8b98-4bb0-b59f-15d8a8087d39" podNamespace="kube-system" podName="cilium-operator-599987898-2jvc6"
Dec 13 03:52:00.477942 systemd[1]: Created slice kubepods-besteffort-podd50d7f2e_8b98_4bb0_b59f_15d8a8087d39.slice.
Dec 13 03:52:00.487523 kubelet[1973]: I1213 03:52:00.487488 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d50d7f2e-8b98-4bb0-b59f-15d8a8087d39-cilium-config-path\") pod \"cilium-operator-599987898-2jvc6\" (UID: \"d50d7f2e-8b98-4bb0-b59f-15d8a8087d39\") " pod="kube-system/cilium-operator-599987898-2jvc6"
Dec 13 03:52:00.487523 kubelet[1973]: I1213 03:52:00.487527 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmb55\" (UniqueName: \"kubernetes.io/projected/d50d7f2e-8b98-4bb0-b59f-15d8a8087d39-kube-api-access-rmb55\") pod \"cilium-operator-599987898-2jvc6\" (UID: \"d50d7f2e-8b98-4bb0-b59f-15d8a8087d39\") " pod="kube-system/cilium-operator-599987898-2jvc6"
Dec 13 03:52:00.488915 env[1144]: time="2024-12-13T03:52:00.488877752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mxt9g,Uid:dfca7331-a462-4415-99a4-3b99a346d4a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"281eddd8ab192ea376739b585881d14cc29fc4d71c82723be72df7ecca1c79f2\""
Dec 13 03:52:00.492548 env[1144]: time="2024-12-13T03:52:00.492502304Z" level=info msg="CreateContainer within sandbox \"281eddd8ab192ea376739b585881d14cc29fc4d71c82723be72df7ecca1c79f2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 03:52:00.532653 env[1144]: time="2024-12-13T03:52:00.532608583Z" level=info msg="CreateContainer within sandbox \"281eddd8ab192ea376739b585881d14cc29fc4d71c82723be72df7ecca1c79f2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f9c159b5f63277e03bb3855be15b8c6029ea0a90218511f05a436835e7bf7ba0\""
Dec 13 03:52:00.533694 env[1144]: time="2024-12-13T03:52:00.533669825Z" level=info msg="StartContainer for \"f9c159b5f63277e03bb3855be15b8c6029ea0a90218511f05a436835e7bf7ba0\""
Dec 13 03:52:00.557019 systemd[1]: Started cri-containerd-f9c159b5f63277e03bb3855be15b8c6029ea0a90218511f05a436835e7bf7ba0.scope.
Dec 13 03:52:00.632093 env[1144]: time="2024-12-13T03:52:00.632037222Z" level=info msg="StartContainer for \"f9c159b5f63277e03bb3855be15b8c6029ea0a90218511f05a436835e7bf7ba0\" returns successfully"
Dec 13 03:52:00.784913 env[1144]: time="2024-12-13T03:52:00.784839049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-2jvc6,Uid:d50d7f2e-8b98-4bb0-b59f-15d8a8087d39,Namespace:kube-system,Attempt:0,}"
Dec 13 03:52:00.839752 env[1144]: time="2024-12-13T03:52:00.838390570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:52:00.840328 env[1144]: time="2024-12-13T03:52:00.840206858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:52:00.840693 env[1144]: time="2024-12-13T03:52:00.840619643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:52:00.846597 env[1144]: time="2024-12-13T03:52:00.844682648Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b821f703eb56acbede8588a79f52aa2f1dd91fe6f3120921e99b0286d7ef3bcb pid=2173 runtime=io.containerd.runc.v2
Dec 13 03:52:00.862444 systemd[1]: Started cri-containerd-b821f703eb56acbede8588a79f52aa2f1dd91fe6f3120921e99b0286d7ef3bcb.scope.
Dec 13 03:52:00.935342 env[1144]: time="2024-12-13T03:52:00.935285174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-2jvc6,Uid:d50d7f2e-8b98-4bb0-b59f-15d8a8087d39,Namespace:kube-system,Attempt:0,} returns sandbox id \"b821f703eb56acbede8588a79f52aa2f1dd91fe6f3120921e99b0286d7ef3bcb\""
Dec 13 03:52:01.132752 kubelet[1973]: I1213 03:52:01.132600 1973 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mxt9g" podStartSLOduration=2.13256889 podStartE2EDuration="2.13256889s" podCreationTimestamp="2024-12-13 03:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:52:01.132002868 +0000 UTC m=+14.308171089" watchObservedRunningTime="2024-12-13 03:52:01.13256889 +0000 UTC m=+14.308737141"
Dec 13 03:52:10.560240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3229335225.mount: Deactivated successfully.
Dec 13 03:52:16.666631 env[1144]: time="2024-12-13T03:52:16.666447337Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:52:16.671234 env[1144]: time="2024-12-13T03:52:16.671153734Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:52:16.676045 env[1144]: time="2024-12-13T03:52:16.675976881Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:52:16.679315 env[1144]: time="2024-12-13T03:52:16.679234134Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 03:52:16.687876 env[1144]: time="2024-12-13T03:52:16.687799053Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 03:52:16.690281 env[1144]: time="2024-12-13T03:52:16.690221959Z" level=info msg="CreateContainer within sandbox \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 03:52:16.727981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2757771149.mount: Deactivated successfully.
Dec 13 03:52:16.747805 env[1144]: time="2024-12-13T03:52:16.747730242Z" level=info msg="CreateContainer within sandbox \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620\"" Dec 13 03:52:16.750539 env[1144]: time="2024-12-13T03:52:16.750474490Z" level=info msg="StartContainer for \"cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620\"" Dec 13 03:52:16.807256 systemd[1]: Started cri-containerd-cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620.scope. Dec 13 03:52:16.845037 env[1144]: time="2024-12-13T03:52:16.844973347Z" level=info msg="StartContainer for \"cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620\" returns successfully" Dec 13 03:52:16.853942 systemd[1]: cri-containerd-cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620.scope: Deactivated successfully. Dec 13 03:52:17.427922 env[1144]: time="2024-12-13T03:52:17.427800004Z" level=info msg="shim disconnected" id=cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620 Dec 13 03:52:17.427922 env[1144]: time="2024-12-13T03:52:17.427921082Z" level=warning msg="cleaning up after shim disconnected" id=cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620 namespace=k8s.io Dec 13 03:52:17.428464 env[1144]: time="2024-12-13T03:52:17.427946911Z" level=info msg="cleaning up dead shim" Dec 13 03:52:17.449180 env[1144]: time="2024-12-13T03:52:17.449049286Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:52:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2375 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T03:52:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Dec 13 03:52:17.717276 systemd[1]: run-containerd-runc-k8s.io-cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620-runc.xERwqE.mount: Deactivated successfully. Dec 13 03:52:17.717513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620-rootfs.mount: Deactivated successfully. Dec 13 03:52:18.186880 env[1144]: time="2024-12-13T03:52:18.186782045Z" level=info msg="CreateContainer within sandbox \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 03:52:18.235436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1345318739.mount: Deactivated successfully. Dec 13 03:52:18.263782 env[1144]: time="2024-12-13T03:52:18.263678806Z" level=info msg="CreateContainer within sandbox \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"62aa0cf4c5a37bac5551fd2486cd1c4aa1a87a807be557d6459ba2af05f09940\"" Dec 13 03:52:18.267803 env[1144]: time="2024-12-13T03:52:18.267698080Z" level=info msg="StartContainer for \"62aa0cf4c5a37bac5551fd2486cd1c4aa1a87a807be557d6459ba2af05f09940\"" Dec 13 03:52:18.318095 systemd[1]: Started cri-containerd-62aa0cf4c5a37bac5551fd2486cd1c4aa1a87a807be557d6459ba2af05f09940.scope. Dec 13 03:52:18.358707 env[1144]: time="2024-12-13T03:52:18.358642956Z" level=info msg="StartContainer for \"62aa0cf4c5a37bac5551fd2486cd1c4aa1a87a807be557d6459ba2af05f09940\" returns successfully" Dec 13 03:52:18.364889 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 03:52:18.365437 systemd[1]: Stopped systemd-sysctl.service. Dec 13 03:52:18.365646 systemd[1]: Stopping systemd-sysctl.service... Dec 13 03:52:18.369575 systemd[1]: Starting systemd-sysctl.service...
Dec 13 03:52:18.372573 systemd[1]: cri-containerd-62aa0cf4c5a37bac5551fd2486cd1c4aa1a87a807be557d6459ba2af05f09940.scope: Deactivated successfully. Dec 13 03:52:18.421420 env[1144]: time="2024-12-13T03:52:18.421343113Z" level=info msg="shim disconnected" id=62aa0cf4c5a37bac5551fd2486cd1c4aa1a87a807be557d6459ba2af05f09940 Dec 13 03:52:18.421420 env[1144]: time="2024-12-13T03:52:18.421416502Z" level=warning msg="cleaning up after shim disconnected" id=62aa0cf4c5a37bac5551fd2486cd1c4aa1a87a807be557d6459ba2af05f09940 namespace=k8s.io Dec 13 03:52:18.421420 env[1144]: time="2024-12-13T03:52:18.421430448Z" level=info msg="cleaning up dead shim" Dec 13 03:52:18.430505 env[1144]: time="2024-12-13T03:52:18.430451643Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:52:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2439 runtime=io.containerd.runc.v2\n" Dec 13 03:52:18.455624 systemd[1]: Finished systemd-sysctl.service. Dec 13 03:52:18.717741 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62aa0cf4c5a37bac5551fd2486cd1c4aa1a87a807be557d6459ba2af05f09940-rootfs.mount: Deactivated successfully. Dec 13 03:52:19.097573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1985732353.mount: Deactivated successfully. Dec 13 03:52:19.199497 env[1144]: time="2024-12-13T03:52:19.199389728Z" level=info msg="CreateContainer within sandbox \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 03:52:19.258288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount12419583.mount: Deactivated successfully. 
Dec 13 03:52:19.271095 env[1144]: time="2024-12-13T03:52:19.270928752Z" level=info msg="CreateContainer within sandbox \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"91f2f2e081f32eebe622abe42af33627f2a1fc2e2238fcb84c819f76acce878e\"" Dec 13 03:52:19.273130 env[1144]: time="2024-12-13T03:52:19.271723685Z" level=info msg="StartContainer for \"91f2f2e081f32eebe622abe42af33627f2a1fc2e2238fcb84c819f76acce878e\"" Dec 13 03:52:19.298963 systemd[1]: Started cri-containerd-91f2f2e081f32eebe622abe42af33627f2a1fc2e2238fcb84c819f76acce878e.scope. Dec 13 03:52:19.347336 systemd[1]: cri-containerd-91f2f2e081f32eebe622abe42af33627f2a1fc2e2238fcb84c819f76acce878e.scope: Deactivated successfully. Dec 13 03:52:19.351343 env[1144]: time="2024-12-13T03:52:19.351259628Z" level=info msg="StartContainer for \"91f2f2e081f32eebe622abe42af33627f2a1fc2e2238fcb84c819f76acce878e\" returns successfully" Dec 13 03:52:19.415009 env[1144]: time="2024-12-13T03:52:19.414914734Z" level=info msg="shim disconnected" id=91f2f2e081f32eebe622abe42af33627f2a1fc2e2238fcb84c819f76acce878e Dec 13 03:52:19.415009 env[1144]: time="2024-12-13T03:52:19.414979626Z" level=warning msg="cleaning up after shim disconnected" id=91f2f2e081f32eebe622abe42af33627f2a1fc2e2238fcb84c819f76acce878e namespace=k8s.io Dec 13 03:52:19.415009 env[1144]: time="2024-12-13T03:52:19.414990306Z" level=info msg="cleaning up dead shim" Dec 13 03:52:19.428740 env[1144]: time="2024-12-13T03:52:19.428657943Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:52:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2498 runtime=io.containerd.runc.v2\n" Dec 13 03:52:20.214831 env[1144]: time="2024-12-13T03:52:20.214738259Z" level=info msg="CreateContainer within sandbox \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 03:52:20.503024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3665374133.mount: Deactivated successfully. Dec 13 03:52:20.511222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3792870591.mount: Deactivated successfully. Dec 13 03:52:20.731752 env[1144]: time="2024-12-13T03:52:20.731665335Z" level=info msg="CreateContainer within sandbox \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b\"" Dec 13 03:52:20.735664 env[1144]: time="2024-12-13T03:52:20.735592806Z" level=info msg="StartContainer for \"4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b\"" Dec 13 03:52:20.777938 systemd[1]: Started cri-containerd-4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b.scope. Dec 13 03:52:20.787918 systemd[1]: run-containerd-runc-k8s.io-4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b-runc.ii49Wk.mount: Deactivated successfully. Dec 13 03:52:20.829220 systemd[1]: cri-containerd-4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b.scope: Deactivated successfully. Dec 13 03:52:20.837260 env[1144]: time="2024-12-13T03:52:20.837167809Z" level=info msg="StartContainer for \"4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b\" returns successfully" Dec 13 03:52:20.875601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b-rootfs.mount: Deactivated successfully.
Dec 13 03:52:21.330827 env[1144]: time="2024-12-13T03:52:21.330715573Z" level=info msg="shim disconnected" id=4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b Dec 13 03:52:21.330827 env[1144]: time="2024-12-13T03:52:21.330808578Z" level=warning msg="cleaning up after shim disconnected" id=4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b namespace=k8s.io Dec 13 03:52:21.330827 env[1144]: time="2024-12-13T03:52:21.330833696Z" level=info msg="cleaning up dead shim" Dec 13 03:52:21.370469 env[1144]: time="2024-12-13T03:52:21.370365575Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:52:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2553 runtime=io.containerd.runc.v2\n" Dec 13 03:52:21.398113 env[1144]: time="2024-12-13T03:52:21.398027638Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:52:21.402399 env[1144]: time="2024-12-13T03:52:21.402327628Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:52:21.406912 env[1144]: time="2024-12-13T03:52:21.406840840Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:52:21.408191 env[1144]: time="2024-12-13T03:52:21.408134852Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 03:52:21.416719 env[1144]: time="2024-12-13T03:52:21.415856382Z" level=info msg="CreateContainer within sandbox \"b821f703eb56acbede8588a79f52aa2f1dd91fe6f3120921e99b0286d7ef3bcb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 03:52:21.471707 env[1144]: time="2024-12-13T03:52:21.471637097Z" level=info msg="CreateContainer within sandbox \"b821f703eb56acbede8588a79f52aa2f1dd91fe6f3120921e99b0286d7ef3bcb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591\"" Dec 13 03:52:21.474886 env[1144]: time="2024-12-13T03:52:21.473379190Z" level=info msg="StartContainer for \"870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591\"" Dec 13 03:52:21.511154 systemd[1]: Started cri-containerd-870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591.scope. Dec 13 03:52:21.608469 env[1144]: time="2024-12-13T03:52:21.608328380Z" level=info msg="StartContainer for \"870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591\" returns successfully" Dec 13 03:52:21.750303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount573668852.mount: Deactivated successfully.
Dec 13 03:52:22.211343 env[1144]: time="2024-12-13T03:52:22.211307262Z" level=info msg="CreateContainer within sandbox \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 03:52:22.245870 env[1144]: time="2024-12-13T03:52:22.245818242Z" level=info msg="CreateContainer within sandbox \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891\"" Dec 13 03:52:22.246456 env[1144]: time="2024-12-13T03:52:22.246434179Z" level=info msg="StartContainer for \"4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891\"" Dec 13 03:52:22.271413 systemd[1]: Started cri-containerd-4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891.scope. Dec 13 03:52:22.371253 env[1144]: time="2024-12-13T03:52:22.371172468Z" level=info msg="StartContainer for \"4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891\" returns successfully" Dec 13 03:52:22.450755 kubelet[1973]: I1213 03:52:22.450698 1973 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-2jvc6" podStartSLOduration=1.9781623910000001 podStartE2EDuration="22.450677768s" podCreationTimestamp="2024-12-13 03:52:00 +0000 UTC" firstStartedPulling="2024-12-13 03:52:00.937927232 +0000 UTC m=+14.114095453" lastFinishedPulling="2024-12-13 03:52:21.410442579 +0000 UTC m=+34.586610830" observedRunningTime="2024-12-13 03:52:22.37394626 +0000 UTC m=+35.550114501" watchObservedRunningTime="2024-12-13 03:52:22.450677768 +0000 UTC m=+35.626845989" Dec 13 03:52:22.749389 systemd[1]: run-containerd-runc-k8s.io-4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891-runc.qEfByY.mount: Deactivated successfully. 
Dec 13 03:52:22.772946 kubelet[1973]: I1213 03:52:22.772787 1973 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 03:52:22.836291 kubelet[1973]: I1213 03:52:22.836258 1973 topology_manager.go:215] "Topology Admit Handler" podUID="9b9ecdef-0b36-4792-af41-11ea845c0eda" podNamespace="kube-system" podName="coredns-7db6d8ff4d-w2xjn" Dec 13 03:52:22.843350 systemd[1]: Created slice kubepods-burstable-pod9b9ecdef_0b36_4792_af41_11ea845c0eda.slice. Dec 13 03:52:22.849963 kubelet[1973]: I1213 03:52:22.849935 1973 topology_manager.go:215] "Topology Admit Handler" podUID="b9222844-9217-4c7c-a37c-cec33292caff" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jwj85" Dec 13 03:52:22.866899 kubelet[1973]: W1213 03:52:22.866822 1973 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510-3-6-5-5611054123.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-5-5611054123.novalocal' and this object Dec 13 03:52:22.867146 kubelet[1973]: E1213 03:52:22.867089 1973 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510-3-6-5-5611054123.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-5-5611054123.novalocal' and this object Dec 13 03:52:22.870050 systemd[1]: Created slice kubepods-burstable-podb9222844_9217_4c7c_a37c_cec33292caff.slice. 
Dec 13 03:52:22.944061 kubelet[1973]: I1213 03:52:22.943994 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b9ecdef-0b36-4792-af41-11ea845c0eda-config-volume\") pod \"coredns-7db6d8ff4d-w2xjn\" (UID: \"9b9ecdef-0b36-4792-af41-11ea845c0eda\") " pod="kube-system/coredns-7db6d8ff4d-w2xjn" Dec 13 03:52:22.944580 kubelet[1973]: I1213 03:52:22.944513 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9222844-9217-4c7c-a37c-cec33292caff-config-volume\") pod \"coredns-7db6d8ff4d-jwj85\" (UID: \"b9222844-9217-4c7c-a37c-cec33292caff\") " pod="kube-system/coredns-7db6d8ff4d-jwj85" Dec 13 03:52:22.944920 kubelet[1973]: I1213 03:52:22.944854 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfkxk\" (UniqueName: \"kubernetes.io/projected/9b9ecdef-0b36-4792-af41-11ea845c0eda-kube-api-access-bfkxk\") pod \"coredns-7db6d8ff4d-w2xjn\" (UID: \"9b9ecdef-0b36-4792-af41-11ea845c0eda\") " pod="kube-system/coredns-7db6d8ff4d-w2xjn" Dec 13 03:52:22.945202 kubelet[1973]: I1213 03:52:22.945166 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtr42\" (UniqueName: \"kubernetes.io/projected/b9222844-9217-4c7c-a37c-cec33292caff-kube-api-access-gtr42\") pod \"coredns-7db6d8ff4d-jwj85\" (UID: \"b9222844-9217-4c7c-a37c-cec33292caff\") " pod="kube-system/coredns-7db6d8ff4d-jwj85" Dec 13 03:52:24.070719 kubelet[1973]: E1213 03:52:24.070628 1973 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Dec 13 03:52:24.071362 kubelet[1973]: E1213 03:52:24.070828 1973 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b9ecdef-0b36-4792-af41-11ea845c0eda-config-volume podName:9b9ecdef-0b36-4792-af41-11ea845c0eda nodeName:}" failed. No retries permitted until 2024-12-13 03:52:24.570774587 +0000 UTC m=+37.746942848 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9b9ecdef-0b36-4792-af41-11ea845c0eda-config-volume") pod "coredns-7db6d8ff4d-w2xjn" (UID: "9b9ecdef-0b36-4792-af41-11ea845c0eda") : failed to sync configmap cache: timed out waiting for the condition
Dec 13 03:52:24.071362 kubelet[1973]: E1213 03:52:24.071325 1973 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Dec 13 03:52:24.071556 kubelet[1973]: E1213 03:52:24.071406 1973 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b9222844-9217-4c7c-a37c-cec33292caff-config-volume podName:b9222844-9217-4c7c-a37c-cec33292caff nodeName:}" failed. No retries permitted until 2024-12-13 03:52:24.571379173 +0000 UTC m=+37.747547444 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b9222844-9217-4c7c-a37c-cec33292caff-config-volume") pod "coredns-7db6d8ff4d-jwj85" (UID: "b9222844-9217-4c7c-a37c-cec33292caff") : failed to sync configmap cache: timed out waiting for the condition Dec 13 03:52:24.860605 env[1144]: time="2024-12-13T03:52:24.859724377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w2xjn,Uid:9b9ecdef-0b36-4792-af41-11ea845c0eda,Namespace:kube-system,Attempt:0,}" Dec 13 03:52:24.860605 env[1144]: time="2024-12-13T03:52:24.859858630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jwj85,Uid:b9222844-9217-4c7c-a37c-cec33292caff,Namespace:kube-system,Attempt:0,}" Dec 13 03:52:26.063245 systemd-networkd[976]: cilium_host: Link UP Dec 13 03:52:26.075940 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 03:52:26.076098 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 03:52:26.064713 systemd-networkd[976]: cilium_net: Link UP Dec 13 03:52:26.067655 systemd-networkd[976]: cilium_net: Gained carrier Dec 13 03:52:26.070973 systemd-networkd[976]: cilium_host: Gained carrier Dec 13 03:52:26.276625 systemd-networkd[976]: cilium_host: Gained IPv6LL Dec 13 03:52:26.428044 systemd-networkd[976]: cilium_vxlan: Link UP Dec 13 03:52:26.428063 systemd-networkd[976]: cilium_vxlan: Gained carrier Dec 13 03:52:26.429240 systemd-networkd[976]: cilium_net: Gained IPv6LL Dec 13 03:52:27.437213 kernel: NET: Registered PF_ALG protocol family Dec 13 03:52:28.100724 systemd-networkd[976]: cilium_vxlan: Gained IPv6LL Dec 13 03:52:28.687146 systemd-networkd[976]: lxc_health: Link UP Dec 13 03:52:28.712667 systemd-networkd[976]: lxc_health: Gained carrier Dec 13 03:52:28.713269 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 03:52:29.078022 systemd-networkd[976]: lxc2dfce8756269: Link UP Dec 13 03:52:29.085380 kernel: eth0: renamed from tmp2c46f Dec 13 03:52:29.098492 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2dfce8756269: link becomes ready Dec 13 03:52:29.096860 systemd-networkd[976]: lxc2dfce8756269: Gained carrier Dec 13 03:52:29.099176 systemd-networkd[976]: lxcec2c4eee49bc: Link UP Dec 13 03:52:29.113605 kernel: eth0: renamed from tmp017be Dec 13 03:52:29.125085 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcec2c4eee49bc: link becomes ready Dec 13 03:52:29.122928 systemd-networkd[976]: lxcec2c4eee49bc: Gained carrier Dec 13 03:52:30.033001 systemd-networkd[976]: lxc_health: Gained IPv6LL Dec 13 03:52:30.277250 systemd-networkd[976]: lxcec2c4eee49bc: Gained IPv6LL Dec 13 03:52:30.322749 kubelet[1973]: I1213 03:52:30.322613 1973 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bxfm7" podStartSLOduration=15.099780854 podStartE2EDuration="31.322596605s" podCreationTimestamp="2024-12-13 03:51:59 +0000 UTC" firstStartedPulling="2024-12-13 03:52:00.460591096 +0000 UTC m=+13.636759327" lastFinishedPulling="2024-12-13 03:52:16.683406807 +0000 UTC m=+29.859575078" observedRunningTime="2024-12-13 03:52:23.235342527 +0000 UTC m=+36.411510748" watchObservedRunningTime="2024-12-13 03:52:30.322596605 +0000 UTC m=+43.498764826" Dec 13 03:52:31.044363 systemd-networkd[976]: lxc2dfce8756269: Gained IPv6LL Dec 13 03:52:33.737711 env[1144]: time="2024-12-13T03:52:33.737618409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:52:33.738193 env[1144]: time="2024-12-13T03:52:33.738156170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:52:33.738308 env[1144]: time="2024-12-13T03:52:33.738286254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:52:33.738754 env[1144]: time="2024-12-13T03:52:33.738685884Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/017be7e4fd8c3b3a3aaf769ec1c9b375a63bfd793b24131226a97aeb982e8526 pid=3143 runtime=io.containerd.runc.v2 Dec 13 03:52:33.762434 systemd[1]: Started cri-containerd-017be7e4fd8c3b3a3aaf769ec1c9b375a63bfd793b24131226a97aeb982e8526.scope. Dec 13 03:52:33.808404 env[1144]: time="2024-12-13T03:52:33.808300204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:52:33.808559 env[1144]: time="2024-12-13T03:52:33.808398769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:52:33.808559 env[1144]: time="2024-12-13T03:52:33.808432883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:52:33.808800 env[1144]: time="2024-12-13T03:52:33.808746953Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2c46f41f14d243c9d8100078dbf832f4634608fffd175a0a775e20e1ed95d95a pid=3179 runtime=io.containerd.runc.v2 Dec 13 03:52:33.835349 systemd[1]: Started cri-containerd-2c46f41f14d243c9d8100078dbf832f4634608fffd175a0a775e20e1ed95d95a.scope.
Dec 13 03:52:33.852353 env[1144]: time="2024-12-13T03:52:33.852316675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jwj85,Uid:b9222844-9217-4c7c-a37c-cec33292caff,Namespace:kube-system,Attempt:0,} returns sandbox id \"017be7e4fd8c3b3a3aaf769ec1c9b375a63bfd793b24131226a97aeb982e8526\"" Dec 13 03:52:33.862743 env[1144]: time="2024-12-13T03:52:33.862516402Z" level=info msg="CreateContainer within sandbox \"017be7e4fd8c3b3a3aaf769ec1c9b375a63bfd793b24131226a97aeb982e8526\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 03:52:33.890869 env[1144]: time="2024-12-13T03:52:33.890828157Z" level=info msg="CreateContainer within sandbox \"017be7e4fd8c3b3a3aaf769ec1c9b375a63bfd793b24131226a97aeb982e8526\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b150765784586c6edf58fe0ffc17c6208e3f9c1ccd436d850b0dbbc1cc2b5e6d\"" Dec 13 03:52:33.892260 env[1144]: time="2024-12-13T03:52:33.892234208Z" level=info msg="StartContainer for \"b150765784586c6edf58fe0ffc17c6208e3f9c1ccd436d850b0dbbc1cc2b5e6d\"" Dec 13 03:52:33.901721 env[1144]: time="2024-12-13T03:52:33.901488279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w2xjn,Uid:9b9ecdef-0b36-4792-af41-11ea845c0eda,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c46f41f14d243c9d8100078dbf832f4634608fffd175a0a775e20e1ed95d95a\"" Dec 13 03:52:33.906550 env[1144]: time="2024-12-13T03:52:33.906518256Z" level=info msg="CreateContainer within sandbox \"2c46f41f14d243c9d8100078dbf832f4634608fffd175a0a775e20e1ed95d95a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 03:52:33.913714 systemd[1]: Started cri-containerd-b150765784586c6edf58fe0ffc17c6208e3f9c1ccd436d850b0dbbc1cc2b5e6d.scope. 
Dec 13 03:52:33.934366 env[1144]: time="2024-12-13T03:52:33.934314633Z" level=info msg="CreateContainer within sandbox \"2c46f41f14d243c9d8100078dbf832f4634608fffd175a0a775e20e1ed95d95a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3f55fac402322a41f836f3ab5451bca8e376b63ccb924277d317023a19b1ed3e\"" Dec 13 03:52:33.935206 env[1144]: time="2024-12-13T03:52:33.935178616Z" level=info msg="StartContainer for \"3f55fac402322a41f836f3ab5451bca8e376b63ccb924277d317023a19b1ed3e\"" Dec 13 03:52:33.966292 systemd[1]: Started cri-containerd-3f55fac402322a41f836f3ab5451bca8e376b63ccb924277d317023a19b1ed3e.scope. Dec 13 03:52:33.977786 env[1144]: time="2024-12-13T03:52:33.977720437Z" level=info msg="StartContainer for \"b150765784586c6edf58fe0ffc17c6208e3f9c1ccd436d850b0dbbc1cc2b5e6d\" returns successfully" Dec 13 03:52:34.008379 env[1144]: time="2024-12-13T03:52:34.008227165Z" level=info msg="StartContainer for \"3f55fac402322a41f836f3ab5451bca8e376b63ccb924277d317023a19b1ed3e\" returns successfully" Dec 13 03:52:34.455185 kubelet[1973]: I1213 03:52:34.455040 1973 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-w2xjn" podStartSLOduration=34.455005501 podStartE2EDuration="34.455005501s" podCreationTimestamp="2024-12-13 03:52:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:52:34.452904576 +0000 UTC m=+47.629072847" watchObservedRunningTime="2024-12-13 03:52:34.455005501 +0000 UTC m=+47.631173772"
Dec 13 03:52:34.461223 kubelet[1973]: I1213 03:52:34.455275 1973 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jwj85" podStartSLOduration=34.455261733 podStartE2EDuration="34.455261733s" podCreationTimestamp="2024-12-13 03:52:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:52:34.384456942 +0000 UTC m=+47.560625213" watchObservedRunningTime="2024-12-13 03:52:34.455261733 +0000 UTC m=+47.631430034" Dec 13 03:53:07.419324 systemd[1]: Started sshd@7-172.24.4.174:22-172.24.4.1:34882.service. Dec 13 03:53:08.719268 sshd[3312]: Accepted publickey for core from 172.24.4.1 port 34882 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 03:53:08.734564 sshd[3312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:53:08.753778 systemd-logind[1131]: New session 8 of user core. Dec 13 03:53:08.754345 systemd[1]: Started session-8.scope. Dec 13 03:53:09.739844 sshd[3312]: pam_unix(sshd:session): session closed for user core Dec 13 03:53:09.747850 systemd-logind[1131]: Session 8 logged out. Waiting for processes to exit. Dec 13 03:53:09.748350 systemd[1]: sshd@7-172.24.4.174:22-172.24.4.1:34882.service: Deactivated successfully. Dec 13 03:53:09.750484 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 03:53:09.755360 systemd-logind[1131]: Removed session 8. Dec 13 03:53:14.751822 systemd[1]: Started sshd@8-172.24.4.174:22-172.24.4.1:59772.service. Dec 13 03:53:15.963157 sshd[3324]: Accepted publickey for core from 172.24.4.1 port 59772 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 03:53:15.965743 sshd[3324]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:53:15.974771 systemd-logind[1131]: New session 9 of user core. Dec 13 03:53:15.977063 systemd[1]: Started session-9.scope. Dec 13 03:53:16.769849 sshd[3324]: pam_unix(sshd:session): session closed for user core Dec 13 03:53:16.775028 systemd[1]: sshd@8-172.24.4.174:22-172.24.4.1:59772.service: Deactivated successfully. Dec 13 03:53:16.776727 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 03:53:16.778008 systemd-logind[1131]: Session 9 logged out. Waiting for processes to exit.
Dec 13 03:53:16.779908 systemd-logind[1131]: Removed session 9. Dec 13 03:53:21.781311 systemd[1]: Started sshd@9-172.24.4.174:22-172.24.4.1:59788.service. Dec 13 03:53:23.352989 sshd[3336]: Accepted publickey for core from 172.24.4.1 port 59788 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 03:53:23.354889 sshd[3336]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:53:23.361578 systemd[1]: Started session-10.scope. Dec 13 03:53:23.362010 systemd-logind[1131]: New session 10 of user core. Dec 13 03:53:25.060243 sshd[3336]: pam_unix(sshd:session): session closed for user core Dec 13 03:53:25.065416 systemd[1]: sshd@9-172.24.4.174:22-172.24.4.1:59788.service: Deactivated successfully. Dec 13 03:53:25.067010 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 03:53:25.068466 systemd-logind[1131]: Session 10 logged out. Waiting for processes to exit. Dec 13 03:53:25.070370 systemd-logind[1131]: Removed session 10. Dec 13 03:53:30.021612 systemd[1]: Started sshd@10-172.24.4.174:22-172.24.4.1:53328.service. Dec 13 03:53:31.342285 sshd[3349]: Accepted publickey for core from 172.24.4.1 port 53328 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 03:53:31.345396 sshd[3349]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:53:31.359868 systemd-logind[1131]: New session 11 of user core. Dec 13 03:53:31.365052 systemd[1]: Started session-11.scope. Dec 13 03:53:32.115029 sshd[3349]: pam_unix(sshd:session): session closed for user core Dec 13 03:53:32.122554 systemd[1]: sshd@10-172.24.4.174:22-172.24.4.1:53328.service: Deactivated successfully. Dec 13 03:53:32.124569 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 03:53:32.128003 systemd-logind[1131]: Session 11 logged out. Waiting for processes to exit. Dec 13 03:53:32.129514 systemd[1]: Started sshd@11-172.24.4.174:22-172.24.4.1:53338.service. 
Dec 13 03:53:32.132453 systemd-logind[1131]: Removed session 11. Dec 13 03:53:33.360690 sshd[3364]: Accepted publickey for core from 172.24.4.1 port 53338 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 03:53:33.363457 sshd[3364]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:53:33.373950 systemd-logind[1131]: New session 12 of user core. Dec 13 03:53:33.375672 systemd[1]: Started session-12.scope. Dec 13 03:53:34.282224 sshd[3364]: pam_unix(sshd:session): session closed for user core Dec 13 03:53:34.288473 systemd[1]: sshd@11-172.24.4.174:22-172.24.4.1:53338.service: Deactivated successfully. Dec 13 03:53:34.291762 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 03:53:34.294269 systemd-logind[1131]: Session 12 logged out. Waiting for processes to exit. Dec 13 03:53:34.298381 systemd[1]: Started sshd@12-172.24.4.174:22-172.24.4.1:53348.service. Dec 13 03:53:34.302339 systemd-logind[1131]: Removed session 12. Dec 13 03:53:35.575814 sshd[3374]: Accepted publickey for core from 172.24.4.1 port 53348 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 03:53:35.581018 sshd[3374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:53:35.595093 systemd[1]: Started session-13.scope. Dec 13 03:53:35.597131 systemd-logind[1131]: New session 13 of user core. 
Dec 13 03:53:35.665967 update_engine[1132]: I1213 03:53:35.665771 1132 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Dec 13 03:53:35.665967 update_engine[1132]: I1213 03:53:35.665887 1132 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Dec 13 03:53:35.672258 update_engine[1132]: I1213 03:53:35.671860 1132 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Dec 13 03:53:35.672847 update_engine[1132]: I1213 03:53:35.672796 1132 omaha_request_params.cc:62] Current group set to lts
Dec 13 03:53:35.680626 update_engine[1132]: I1213 03:53:35.680568 1132 update_attempter.cc:499] Already updated boot flags. Skipping.
Dec 13 03:53:35.680626 update_engine[1132]: I1213 03:53:35.680601 1132 update_attempter.cc:643] Scheduling an action processor start.
Dec 13 03:53:35.683620 update_engine[1132]: I1213 03:53:35.683563 1132 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 03:53:35.683748 update_engine[1132]: I1213 03:53:35.683652 1132 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Dec 13 03:53:35.683889 update_engine[1132]: I1213 03:53:35.683791 1132 omaha_request_action.cc:270] Posting an Omaha request to disabled
Dec 13 03:53:35.683889 update_engine[1132]: I1213 03:53:35.683806 1132 omaha_request_action.cc:271] Request:
Dec 13 03:53:35.683889 update_engine[1132]:
Dec 13 03:53:35.683889 update_engine[1132]:
Dec 13 03:53:35.683889 update_engine[1132]:
Dec 13 03:53:35.683889 update_engine[1132]:
Dec 13 03:53:35.683889 update_engine[1132]:
Dec 13 03:53:35.683889 update_engine[1132]:
Dec 13 03:53:35.683889 update_engine[1132]:
Dec 13 03:53:35.683889 update_engine[1132]:
Dec 13 03:53:35.683889 update_engine[1132]: I1213 03:53:35.683815 1132 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 03:53:35.705188 update_engine[1132]: I1213 03:53:35.705088 1132 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 03:53:35.705433 update_engine[1132]: E1213 03:53:35.705386 1132 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 03:53:35.706165 update_engine[1132]: I1213 03:53:35.705565 1132 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Dec 13 03:53:35.732763 locksmithd[1179]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Dec 13 03:53:36.450415 sshd[3374]: pam_unix(sshd:session): session closed for user core
Dec 13 03:53:36.455600 systemd[1]: sshd@12-172.24.4.174:22-172.24.4.1:53348.service: Deactivated successfully.
Dec 13 03:53:36.457284 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 03:53:36.459474 systemd-logind[1131]: Session 13 logged out. Waiting for processes to exit.
Dec 13 03:53:36.461919 systemd-logind[1131]: Removed session 13.
Dec 13 03:53:41.458601 systemd[1]: Started sshd@13-172.24.4.174:22-172.24.4.1:36376.service.
Dec 13 03:53:42.989248 sshd[3387]: Accepted publickey for core from 172.24.4.1 port 36376 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:53:42.991819 sshd[3387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:53:43.003043 systemd-logind[1131]: New session 14 of user core.
Dec 13 03:53:43.004445 systemd[1]: Started session-14.scope.
Dec 13 03:53:43.752584 sshd[3387]: pam_unix(sshd:session): session closed for user core
Dec 13 03:53:43.758522 systemd[1]: sshd@13-172.24.4.174:22-172.24.4.1:36376.service: Deactivated successfully.
Dec 13 03:53:43.760623 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 03:53:43.762242 systemd-logind[1131]: Session 14 logged out. Waiting for processes to exit.
Dec 13 03:53:43.764956 systemd-logind[1131]: Removed session 14.
Dec 13 03:53:45.655254 update_engine[1132]: I1213 03:53:45.655010 1132 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 03:53:45.655876 update_engine[1132]: I1213 03:53:45.655390 1132 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 03:53:45.655876 update_engine[1132]: E1213 03:53:45.655539 1132 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 03:53:45.655876 update_engine[1132]: I1213 03:53:45.655647 1132 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Dec 13 03:53:48.762027 systemd[1]: Started sshd@14-172.24.4.174:22-172.24.4.1:38932.service.
Dec 13 03:53:50.013961 sshd[3401]: Accepted publickey for core from 172.24.4.1 port 38932 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:53:50.017474 sshd[3401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:53:50.029173 systemd-logind[1131]: New session 15 of user core.
Dec 13 03:53:50.031059 systemd[1]: Started session-15.scope.
Dec 13 03:53:50.658891 sshd[3401]: pam_unix(sshd:session): session closed for user core
Dec 13 03:53:50.667283 systemd[1]: Started sshd@15-172.24.4.174:22-172.24.4.1:38938.service.
Dec 13 03:53:50.676191 systemd[1]: sshd@14-172.24.4.174:22-172.24.4.1:38932.service: Deactivated successfully.
Dec 13 03:53:50.677831 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 03:53:50.679800 systemd-logind[1131]: Session 15 logged out. Waiting for processes to exit.
Dec 13 03:53:50.682847 systemd-logind[1131]: Removed session 15.
Dec 13 03:53:51.862359 sshd[3411]: Accepted publickey for core from 172.24.4.1 port 38938 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:53:51.866287 sshd[3411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:53:51.878238 systemd-logind[1131]: New session 16 of user core.
Dec 13 03:53:51.880564 systemd[1]: Started session-16.scope.
Dec 13 03:53:53.203974 sshd[3411]: pam_unix(sshd:session): session closed for user core
Dec 13 03:53:53.209921 systemd[1]: Started sshd@16-172.24.4.174:22-172.24.4.1:38942.service.
Dec 13 03:53:53.216524 systemd[1]: sshd@15-172.24.4.174:22-172.24.4.1:38938.service: Deactivated successfully.
Dec 13 03:53:53.218892 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 03:53:53.222491 systemd-logind[1131]: Session 16 logged out. Waiting for processes to exit.
Dec 13 03:53:53.226016 systemd-logind[1131]: Removed session 16.
Dec 13 03:53:54.718343 sshd[3420]: Accepted publickey for core from 172.24.4.1 port 38942 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:53:54.720159 sshd[3420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:53:54.727601 systemd-logind[1131]: New session 17 of user core.
Dec 13 03:53:54.728232 systemd[1]: Started session-17.scope.
Dec 13 03:53:55.655032 update_engine[1132]: I1213 03:53:55.654260 1132 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 03:53:55.655032 update_engine[1132]: I1213 03:53:55.654647 1132 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 03:53:55.655032 update_engine[1132]: E1213 03:53:55.654819 1132 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 03:53:55.655032 update_engine[1132]: I1213 03:53:55.654956 1132 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Dec 13 03:53:58.410176 sshd[3420]: pam_unix(sshd:session): session closed for user core
Dec 13 03:53:58.422416 systemd[1]: Started sshd@17-172.24.4.174:22-172.24.4.1:50108.service.
Dec 13 03:53:58.427845 systemd[1]: sshd@16-172.24.4.174:22-172.24.4.1:38942.service: Deactivated successfully.
Dec 13 03:53:58.429624 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 03:53:58.431295 systemd-logind[1131]: Session 17 logged out. Waiting for processes to exit.
Dec 13 03:53:58.437310 systemd-logind[1131]: Removed session 17.
Dec 13 03:53:59.685486 sshd[3436]: Accepted publickey for core from 172.24.4.1 port 50108 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:53:59.688270 sshd[3436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:53:59.700727 systemd-logind[1131]: New session 18 of user core.
Dec 13 03:53:59.701455 systemd[1]: Started session-18.scope.
Dec 13 03:54:00.817270 sshd[3436]: pam_unix(sshd:session): session closed for user core
Dec 13 03:54:00.824091 systemd[1]: sshd@17-172.24.4.174:22-172.24.4.1:50108.service: Deactivated successfully.
Dec 13 03:54:00.826294 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 03:54:00.828198 systemd-logind[1131]: Session 18 logged out. Waiting for processes to exit.
Dec 13 03:54:00.832283 systemd[1]: Started sshd@18-172.24.4.174:22-172.24.4.1:50122.service.
Dec 13 03:54:00.840644 systemd-logind[1131]: Removed session 18.
Dec 13 03:54:02.012073 sshd[3447]: Accepted publickey for core from 172.24.4.1 port 50122 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:54:02.014926 sshd[3447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:54:02.024694 systemd-logind[1131]: New session 19 of user core.
Dec 13 03:54:02.025566 systemd[1]: Started session-19.scope.
Dec 13 03:54:02.760745 sshd[3447]: pam_unix(sshd:session): session closed for user core
Dec 13 03:54:02.765387 systemd[1]: sshd@18-172.24.4.174:22-172.24.4.1:50122.service: Deactivated successfully.
Dec 13 03:54:02.766214 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 03:54:02.767339 systemd-logind[1131]: Session 19 logged out. Waiting for processes to exit.
Dec 13 03:54:02.768576 systemd-logind[1131]: Removed session 19.
Dec 13 03:54:05.655265 update_engine[1132]: I1213 03:54:05.654624 1132 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 03:54:05.655265 update_engine[1132]: I1213 03:54:05.654937 1132 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 03:54:05.655265 update_engine[1132]: E1213 03:54:05.655071 1132 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 03:54:05.655265 update_engine[1132]: I1213 03:54:05.655195 1132 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 03:54:05.655265 update_engine[1132]: I1213 03:54:05.655206 1132 omaha_request_action.cc:621] Omaha request response:
Dec 13 03:54:05.656528 update_engine[1132]: E1213 03:54:05.655292 1132 omaha_request_action.cc:640] Omaha request network transfer failed.
Dec 13 03:54:05.656528 update_engine[1132]: I1213 03:54:05.656428 1132 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Dec 13 03:54:05.656528 update_engine[1132]: I1213 03:54:05.656447 1132 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 03:54:05.656528 update_engine[1132]: I1213 03:54:05.656453 1132 update_attempter.cc:306] Processing Done.
Dec 13 03:54:05.656528 update_engine[1132]: E1213 03:54:05.656468 1132 update_attempter.cc:619] Update failed.
Dec 13 03:54:05.656528 update_engine[1132]: I1213 03:54:05.656476 1132 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Dec 13 03:54:05.656528 update_engine[1132]: I1213 03:54:05.656481 1132 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Dec 13 03:54:05.656528 update_engine[1132]: I1213 03:54:05.656486 1132 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Dec 13 03:54:05.657010 update_engine[1132]: I1213 03:54:05.656591 1132 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 03:54:05.657010 update_engine[1132]: I1213 03:54:05.656620 1132 omaha_request_action.cc:270] Posting an Omaha request to disabled
Dec 13 03:54:05.657010 update_engine[1132]: I1213 03:54:05.656627 1132 omaha_request_action.cc:271] Request:
Dec 13 03:54:05.657010 update_engine[1132]:
Dec 13 03:54:05.657010 update_engine[1132]:
Dec 13 03:54:05.657010 update_engine[1132]:
Dec 13 03:54:05.657010 update_engine[1132]:
Dec 13 03:54:05.657010 update_engine[1132]:
Dec 13 03:54:05.657010 update_engine[1132]:
Dec 13 03:54:05.657010 update_engine[1132]: I1213 03:54:05.656633 1132 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 03:54:05.657010 update_engine[1132]: I1213 03:54:05.656821 1132 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 03:54:05.657010 update_engine[1132]: E1213 03:54:05.656932 1132 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 03:54:05.657010 update_engine[1132]: I1213 03:54:05.657012 1132 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 03:54:05.657961 update_engine[1132]: I1213 03:54:05.657022 1132 omaha_request_action.cc:621] Omaha request response:
Dec 13 03:54:05.657961 update_engine[1132]: I1213 03:54:05.657028 1132 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 03:54:05.657961 update_engine[1132]: I1213 03:54:05.657033 1132 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 03:54:05.657961 update_engine[1132]: I1213 03:54:05.657038 1132 update_attempter.cc:306] Processing Done.
Dec 13 03:54:05.657961 update_engine[1132]: I1213 03:54:05.657045 1132 update_attempter.cc:310] Error event sent.
Dec 13 03:54:05.657961 update_engine[1132]: I1213 03:54:05.657932 1132 update_check_scheduler.cc:74] Next update check in 40m35s
Dec 13 03:54:05.658355 locksmithd[1179]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Dec 13 03:54:05.658355 locksmithd[1179]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Dec 13 03:54:07.770170 systemd[1]: Started sshd@19-172.24.4.174:22-172.24.4.1:49250.service.
Dec 13 03:54:09.112177 sshd[3462]: Accepted publickey for core from 172.24.4.1 port 49250 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:54:09.113801 sshd[3462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:54:09.129322 systemd-logind[1131]: New session 20 of user core.
Dec 13 03:54:09.129643 systemd[1]: Started session-20.scope.
Dec 13 03:54:09.960202 sshd[3462]: pam_unix(sshd:session): session closed for user core
Dec 13 03:54:09.966931 systemd[1]: sshd@19-172.24.4.174:22-172.24.4.1:49250.service: Deactivated successfully.
Dec 13 03:54:09.969286 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 03:54:09.971445 systemd-logind[1131]: Session 20 logged out. Waiting for processes to exit.
Dec 13 03:54:09.975015 systemd-logind[1131]: Removed session 20.
Dec 13 03:54:14.969239 systemd[1]: Started sshd@20-172.24.4.174:22-172.24.4.1:50354.service.
Dec 13 03:54:16.330399 sshd[3478]: Accepted publickey for core from 172.24.4.1 port 50354 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:54:16.333528 sshd[3478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:54:16.343925 systemd-logind[1131]: New session 21 of user core.
Dec 13 03:54:16.344903 systemd[1]: Started session-21.scope.
Dec 13 03:54:17.846352 sshd[3478]: pam_unix(sshd:session): session closed for user core
Dec 13 03:54:17.851747 systemd-logind[1131]: Session 21 logged out. Waiting for processes to exit.
Dec 13 03:54:17.852544 systemd[1]: sshd@20-172.24.4.174:22-172.24.4.1:50354.service: Deactivated successfully.
Dec 13 03:54:17.854237 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 03:54:17.856052 systemd-logind[1131]: Removed session 21.
Dec 13 03:54:22.857419 systemd[1]: Started sshd@21-172.24.4.174:22-172.24.4.1:50364.service.
Dec 13 03:54:24.099010 sshd[3490]: Accepted publickey for core from 172.24.4.1 port 50364 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:54:24.102521 sshd[3490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:54:24.114285 systemd[1]: Started session-22.scope.
Dec 13 03:54:24.115145 systemd-logind[1131]: New session 22 of user core.
Dec 13 03:54:24.916814 sshd[3490]: pam_unix(sshd:session): session closed for user core
Dec 13 03:54:24.927924 systemd[1]: Started sshd@22-172.24.4.174:22-172.24.4.1:45720.service.
Dec 13 03:54:24.929522 systemd[1]: sshd@21-172.24.4.174:22-172.24.4.1:50364.service: Deactivated successfully.
Dec 13 03:54:24.931339 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 03:54:24.934405 systemd-logind[1131]: Session 22 logged out. Waiting for processes to exit.
Dec 13 03:54:24.936943 systemd-logind[1131]: Removed session 22.
Dec 13 03:54:26.183605 sshd[3501]: Accepted publickey for core from 172.24.4.1 port 45720 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:54:26.186250 sshd[3501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:54:26.197245 systemd-logind[1131]: New session 23 of user core.
Dec 13 03:54:26.198313 systemd[1]: Started session-23.scope.
Dec 13 03:54:29.026673 systemd[1]: run-containerd-runc-k8s.io-4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891-runc.MfFfqs.mount: Deactivated successfully.
Dec 13 03:54:29.030287 env[1144]: time="2024-12-13T03:54:29.030245370Z" level=info msg="StopContainer for \"870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591\" with timeout 30 (s)"
Dec 13 03:54:29.032551 env[1144]: time="2024-12-13T03:54:29.031567254Z" level=info msg="Stop container \"870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591\" with signal terminated"
Dec 13 03:54:29.048840 systemd[1]: cri-containerd-870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591.scope: Deactivated successfully.
Dec 13 03:54:29.069742 env[1144]: time="2024-12-13T03:54:29.069528759Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 03:54:29.076641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591-rootfs.mount: Deactivated successfully.
Dec 13 03:54:29.081603 env[1144]: time="2024-12-13T03:54:29.081556367Z" level=info msg="StopContainer for \"4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891\" with timeout 2 (s)"
Dec 13 03:54:29.081933 env[1144]: time="2024-12-13T03:54:29.081905282Z" level=info msg="Stop container \"4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891\" with signal terminated"
Dec 13 03:54:29.086829 env[1144]: time="2024-12-13T03:54:29.086771790Z" level=info msg="shim disconnected" id=870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591
Dec 13 03:54:29.086829 env[1144]: time="2024-12-13T03:54:29.086821664Z" level=warning msg="cleaning up after shim disconnected" id=870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591 namespace=k8s.io
Dec 13 03:54:29.086829 env[1144]: time="2024-12-13T03:54:29.086832665Z" level=info msg="cleaning up dead shim"
Dec 13 03:54:29.091242 systemd-networkd[976]: lxc_health: Link DOWN
Dec 13 03:54:29.091250 systemd-networkd[976]: lxc_health: Lost carrier
Dec 13 03:54:29.118190 env[1144]: time="2024-12-13T03:54:29.117968289Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:54:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3553 runtime=io.containerd.runc.v2\n"
Dec 13 03:54:29.122379 systemd[1]: cri-containerd-4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891.scope: Deactivated successfully.
Dec 13 03:54:29.125079 env[1144]: time="2024-12-13T03:54:29.122621185Z" level=info msg="StopContainer for \"870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591\" returns successfully"
Dec 13 03:54:29.125079 env[1144]: time="2024-12-13T03:54:29.123336549Z" level=info msg="StopPodSandbox for \"b821f703eb56acbede8588a79f52aa2f1dd91fe6f3120921e99b0286d7ef3bcb\""
Dec 13 03:54:29.125079 env[1144]: time="2024-12-13T03:54:29.123407512Z" level=info msg="Container to stop \"870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 03:54:29.122628 systemd[1]: cri-containerd-4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891.scope: Consumed 8.927s CPU time.
Dec 13 03:54:29.129032 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b821f703eb56acbede8588a79f52aa2f1dd91fe6f3120921e99b0286d7ef3bcb-shm.mount: Deactivated successfully.
Dec 13 03:54:29.138953 systemd[1]: cri-containerd-b821f703eb56acbede8588a79f52aa2f1dd91fe6f3120921e99b0286d7ef3bcb.scope: Deactivated successfully.
Dec 13 03:54:29.170648 env[1144]: time="2024-12-13T03:54:29.170590110Z" level=info msg="shim disconnected" id=4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891
Dec 13 03:54:29.170952 env[1144]: time="2024-12-13T03:54:29.170920381Z" level=warning msg="cleaning up after shim disconnected" id=4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891 namespace=k8s.io
Dec 13 03:54:29.171055 env[1144]: time="2024-12-13T03:54:29.171038974Z" level=info msg="cleaning up dead shim"
Dec 13 03:54:29.181457 env[1144]: time="2024-12-13T03:54:29.181423182Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:54:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3599 runtime=io.containerd.runc.v2\n"
Dec 13 03:54:29.183269 env[1144]: time="2024-12-13T03:54:29.183228977Z" level=info msg="shim disconnected" id=b821f703eb56acbede8588a79f52aa2f1dd91fe6f3120921e99b0286d7ef3bcb
Dec 13 03:54:29.183766 env[1144]: time="2024-12-13T03:54:29.183746349Z" level=warning msg="cleaning up after shim disconnected" id=b821f703eb56acbede8588a79f52aa2f1dd91fe6f3120921e99b0286d7ef3bcb namespace=k8s.io
Dec 13 03:54:29.183849 env[1144]: time="2024-12-13T03:54:29.183833072Z" level=info msg="cleaning up dead shim"
Dec 13 03:54:29.186562 env[1144]: time="2024-12-13T03:54:29.186515174Z" level=info msg="StopContainer for \"4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891\" returns successfully"
Dec 13 03:54:29.187419 env[1144]: time="2024-12-13T03:54:29.187392423Z" level=info msg="StopPodSandbox for \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\""
Dec 13 03:54:29.187560 env[1144]: time="2024-12-13T03:54:29.187536634Z" level=info msg="Container to stop \"4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 03:54:29.187652 env[1144]: time="2024-12-13T03:54:29.187630129Z" level=info msg="Container to stop \"4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 03:54:29.187748 env[1144]: time="2024-12-13T03:54:29.187727172Z" level=info msg="Container to stop \"cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 03:54:29.187829 env[1144]: time="2024-12-13T03:54:29.187810950Z" level=info msg="Container to stop \"62aa0cf4c5a37bac5551fd2486cd1c4aa1a87a807be557d6459ba2af05f09940\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 03:54:29.187904 env[1144]: time="2024-12-13T03:54:29.187885159Z" level=info msg="Container to stop \"91f2f2e081f32eebe622abe42af33627f2a1fc2e2238fcb84c819f76acce878e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 03:54:29.193412 env[1144]: time="2024-12-13T03:54:29.193367674Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:54:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3612 runtime=io.containerd.runc.v2\n"
Dec 13 03:54:29.193965 env[1144]: time="2024-12-13T03:54:29.193937475Z" level=info msg="TearDown network for sandbox \"b821f703eb56acbede8588a79f52aa2f1dd91fe6f3120921e99b0286d7ef3bcb\" successfully"
Dec 13 03:54:29.194751 env[1144]: time="2024-12-13T03:54:29.194726529Z" level=info msg="StopPodSandbox for \"b821f703eb56acbede8588a79f52aa2f1dd91fe6f3120921e99b0286d7ef3bcb\" returns successfully"
Dec 13 03:54:29.202790 systemd[1]: cri-containerd-68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0.scope: Deactivated successfully.
Dec 13 03:54:29.245252 env[1144]: time="2024-12-13T03:54:29.245083673Z" level=info msg="shim disconnected" id=68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0
Dec 13 03:54:29.245463 env[1144]: time="2024-12-13T03:54:29.245444551Z" level=warning msg="cleaning up after shim disconnected" id=68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0 namespace=k8s.io
Dec 13 03:54:29.245549 env[1144]: time="2024-12-13T03:54:29.245534831Z" level=info msg="cleaning up dead shim"
Dec 13 03:54:29.254547 env[1144]: time="2024-12-13T03:54:29.254499061Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:54:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3643 runtime=io.containerd.runc.v2\n"
Dec 13 03:54:29.255031 env[1144]: time="2024-12-13T03:54:29.255003499Z" level=info msg="TearDown network for sandbox \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\" successfully"
Dec 13 03:54:29.255149 env[1144]: time="2024-12-13T03:54:29.255116632Z" level=info msg="StopPodSandbox for \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\" returns successfully"
Dec 13 03:54:29.331699 kubelet[1973]: I1213 03:54:29.329486 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-cni-path\") pod \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") "
Dec 13 03:54:29.332132 kubelet[1973]: I1213 03:54:29.332097 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-bpf-maps\") pod \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") "
Dec 13 03:54:29.332238 kubelet[1973]: I1213 03:54:29.332221 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-cilium-config-path\") pod \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") "
Dec 13 03:54:29.332338 kubelet[1973]: I1213 03:54:29.332324 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-etc-cni-netd\") pod \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") "
Dec 13 03:54:29.332436 kubelet[1973]: I1213 03:54:29.332414 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdfhv\" (UniqueName: \"kubernetes.io/projected/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-kube-api-access-kdfhv\") pod \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") "
Dec 13 03:54:29.332537 kubelet[1973]: I1213 03:54:29.332523 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-host-proc-sys-net\") pod \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") "
Dec 13 03:54:29.332624 kubelet[1973]: I1213 03:54:29.332610 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-lib-modules\") pod \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") "
Dec 13 03:54:29.332711 kubelet[1973]: I1213 03:54:29.332697 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-hubble-tls\") pod \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") "
Dec 13 03:54:29.332792 kubelet[1973]: I1213 03:54:29.332779 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-cilium-cgroup\") pod \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") "
Dec 13 03:54:29.332879 kubelet[1973]: I1213 03:54:29.332866 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-clustermesh-secrets\") pod \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") "
Dec 13 03:54:29.332965 kubelet[1973]: I1213 03:54:29.332951 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-hostproc\") pod \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") "
Dec 13 03:54:29.333048 kubelet[1973]: I1213 03:54:29.333035 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d50d7f2e-8b98-4bb0-b59f-15d8a8087d39-cilium-config-path\") pod \"d50d7f2e-8b98-4bb0-b59f-15d8a8087d39\" (UID: \"d50d7f2e-8b98-4bb0-b59f-15d8a8087d39\") "
Dec 13 03:54:29.333164 kubelet[1973]: I1213 03:54:29.333149 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmb55\" (UniqueName: \"kubernetes.io/projected/d50d7f2e-8b98-4bb0-b59f-15d8a8087d39-kube-api-access-rmb55\") pod \"d50d7f2e-8b98-4bb0-b59f-15d8a8087d39\" (UID: \"d50d7f2e-8b98-4bb0-b59f-15d8a8087d39\") "
Dec 13 03:54:29.333258 kubelet[1973]: I1213 03:54:29.333238 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-cilium-run\") pod \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") "
Dec 13 03:54:29.333346 kubelet[1973]: I1213 03:54:29.333331 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-host-proc-sys-kernel\") pod \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") "
Dec 13 03:54:29.333429 kubelet[1973]: I1213 03:54:29.333416 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-xtables-lock\") pod \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\" (UID: \"3d4a4f25-a9d3-47b4-8463-be8ab58137b7\") "
Dec 13 03:54:29.341001 kubelet[1973]: I1213 03:54:29.329476 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-cni-path" (OuterVolumeSpecName: "cni-path") pod "3d4a4f25-a9d3-47b4-8463-be8ab58137b7" (UID: "3d4a4f25-a9d3-47b4-8463-be8ab58137b7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:54:29.341139 kubelet[1973]: I1213 03:54:29.332669 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3d4a4f25-a9d3-47b4-8463-be8ab58137b7" (UID: "3d4a4f25-a9d3-47b4-8463-be8ab58137b7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:54:29.341214 kubelet[1973]: I1213 03:54:29.332706 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3d4a4f25-a9d3-47b4-8463-be8ab58137b7" (UID: "3d4a4f25-a9d3-47b4-8463-be8ab58137b7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:54:29.341274 kubelet[1973]: I1213 03:54:29.333522 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3d4a4f25-a9d3-47b4-8463-be8ab58137b7" (UID: "3d4a4f25-a9d3-47b4-8463-be8ab58137b7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:54:29.341339 kubelet[1973]: I1213 03:54:29.340488 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3d4a4f25-a9d3-47b4-8463-be8ab58137b7" (UID: "3d4a4f25-a9d3-47b4-8463-be8ab58137b7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 03:54:29.341466 kubelet[1973]: I1213 03:54:29.340879 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-hostproc" (OuterVolumeSpecName: "hostproc") pod "3d4a4f25-a9d3-47b4-8463-be8ab58137b7" (UID: "3d4a4f25-a9d3-47b4-8463-be8ab58137b7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:54:29.341535 kubelet[1973]: I1213 03:54:29.340963 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3d4a4f25-a9d3-47b4-8463-be8ab58137b7" (UID: "3d4a4f25-a9d3-47b4-8463-be8ab58137b7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:54:29.341841 kubelet[1973]: I1213 03:54:29.340984 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3d4a4f25-a9d3-47b4-8463-be8ab58137b7" (UID: "3d4a4f25-a9d3-47b4-8463-be8ab58137b7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:54:29.341942 kubelet[1973]: I1213 03:54:29.341926 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3d4a4f25-a9d3-47b4-8463-be8ab58137b7" (UID: "3d4a4f25-a9d3-47b4-8463-be8ab58137b7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:54:29.344048 kubelet[1973]: I1213 03:54:29.344027 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d50d7f2e-8b98-4bb0-b59f-15d8a8087d39-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d50d7f2e-8b98-4bb0-b59f-15d8a8087d39" (UID: "d50d7f2e-8b98-4bb0-b59f-15d8a8087d39"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 03:54:29.344833 kubelet[1973]: I1213 03:54:29.344791 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3d4a4f25-a9d3-47b4-8463-be8ab58137b7" (UID: "3d4a4f25-a9d3-47b4-8463-be8ab58137b7"). InnerVolumeSpecName "cilium-run".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:29.344895 kubelet[1973]: I1213 03:54:29.344840 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3d4a4f25-a9d3-47b4-8463-be8ab58137b7" (UID: "3d4a4f25-a9d3-47b4-8463-be8ab58137b7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:29.348747 kubelet[1973]: I1213 03:54:29.348716 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3d4a4f25-a9d3-47b4-8463-be8ab58137b7" (UID: "3d4a4f25-a9d3-47b4-8463-be8ab58137b7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:54:29.350703 kubelet[1973]: I1213 03:54:29.350631 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-kube-api-access-kdfhv" (OuterVolumeSpecName: "kube-api-access-kdfhv") pod "3d4a4f25-a9d3-47b4-8463-be8ab58137b7" (UID: "3d4a4f25-a9d3-47b4-8463-be8ab58137b7"). InnerVolumeSpecName "kube-api-access-kdfhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:54:29.350766 kubelet[1973]: I1213 03:54:29.350710 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3d4a4f25-a9d3-47b4-8463-be8ab58137b7" (UID: "3d4a4f25-a9d3-47b4-8463-be8ab58137b7"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:54:29.350766 kubelet[1973]: I1213 03:54:29.350745 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d50d7f2e-8b98-4bb0-b59f-15d8a8087d39-kube-api-access-rmb55" (OuterVolumeSpecName: "kube-api-access-rmb55") pod "d50d7f2e-8b98-4bb0-b59f-15d8a8087d39" (UID: "d50d7f2e-8b98-4bb0-b59f-15d8a8087d39"). InnerVolumeSpecName "kube-api-access-rmb55". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:54:29.443375 kubelet[1973]: I1213 03:54:29.443323 1973 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rmb55\" (UniqueName: \"kubernetes.io/projected/d50d7f2e-8b98-4bb0-b59f-15d8a8087d39-kube-api-access-rmb55\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\"" Dec 13 03:54:29.443678 kubelet[1973]: I1213 03:54:29.443644 1973 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d50d7f2e-8b98-4bb0-b59f-15d8a8087d39-cilium-config-path\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\"" Dec 13 03:54:29.443871 kubelet[1973]: I1213 03:54:29.443841 1973 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-host-proc-sys-kernel\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\"" Dec 13 03:54:29.444041 kubelet[1973]: I1213 03:54:29.444013 1973 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-xtables-lock\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\"" Dec 13 03:54:29.445269 kubelet[1973]: I1213 03:54:29.445234 1973 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-cilium-run\") on node 
\"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\"" Dec 13 03:54:29.445638 kubelet[1973]: I1213 03:54:29.445607 1973 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-cni-path\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\"" Dec 13 03:54:29.445985 kubelet[1973]: I1213 03:54:29.445954 1973 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-bpf-maps\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\"" Dec 13 03:54:29.446282 kubelet[1973]: I1213 03:54:29.446251 1973 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-cilium-config-path\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\"" Dec 13 03:54:29.446487 kubelet[1973]: I1213 03:54:29.446458 1973 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-etc-cni-netd\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\"" Dec 13 03:54:29.446642 kubelet[1973]: I1213 03:54:29.446615 1973 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kdfhv\" (UniqueName: \"kubernetes.io/projected/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-kube-api-access-kdfhv\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\"" Dec 13 03:54:29.446800 kubelet[1973]: I1213 03:54:29.446773 1973 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-lib-modules\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\"" Dec 13 03:54:29.446960 kubelet[1973]: I1213 03:54:29.446933 1973 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-host-proc-sys-net\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\"" Dec 13 03:54:29.447170 kubelet[1973]: I1213 03:54:29.447138 1973 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-clustermesh-secrets\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\"" Dec 13 03:54:29.447355 kubelet[1973]: I1213 03:54:29.447327 1973 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-hostproc\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\"" Dec 13 03:54:29.447513 kubelet[1973]: I1213 03:54:29.447488 1973 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-hubble-tls\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\"" Dec 13 03:54:29.448096 kubelet[1973]: I1213 03:54:29.447659 1973 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d4a4f25-a9d3-47b4-8463-be8ab58137b7-cilium-cgroup\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\"" Dec 13 03:54:29.655164 systemd[1]: Removed slice kubepods-burstable-pod3d4a4f25_a9d3_47b4_8463_be8ab58137b7.slice. Dec 13 03:54:29.655384 systemd[1]: kubepods-burstable-pod3d4a4f25_a9d3_47b4_8463_be8ab58137b7.slice: Consumed 9.036s CPU time. 
Dec 13 03:54:29.682835 kubelet[1973]: I1213 03:54:29.682747 1973 scope.go:117] "RemoveContainer" containerID="4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891" Dec 13 03:54:29.685822 env[1144]: time="2024-12-13T03:54:29.685441587Z" level=info msg="RemoveContainer for \"4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891\"" Dec 13 03:54:29.703493 systemd[1]: Removed slice kubepods-besteffort-podd50d7f2e_8b98_4bb0_b59f_15d8a8087d39.slice. Dec 13 03:54:29.711279 env[1144]: time="2024-12-13T03:54:29.709281048Z" level=info msg="RemoveContainer for \"4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891\" returns successfully" Dec 13 03:54:29.711535 kubelet[1973]: I1213 03:54:29.710193 1973 scope.go:117] "RemoveContainer" containerID="4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b" Dec 13 03:54:29.717581 env[1144]: time="2024-12-13T03:54:29.717487923Z" level=info msg="RemoveContainer for \"4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b\"" Dec 13 03:54:29.725738 env[1144]: time="2024-12-13T03:54:29.725653191Z" level=info msg="RemoveContainer for \"4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b\" returns successfully" Dec 13 03:54:29.726154 kubelet[1973]: I1213 03:54:29.726072 1973 scope.go:117] "RemoveContainer" containerID="91f2f2e081f32eebe622abe42af33627f2a1fc2e2238fcb84c819f76acce878e" Dec 13 03:54:29.730201 env[1144]: time="2024-12-13T03:54:29.729606332Z" level=info msg="RemoveContainer for \"91f2f2e081f32eebe622abe42af33627f2a1fc2e2238fcb84c819f76acce878e\"" Dec 13 03:54:29.735972 env[1144]: time="2024-12-13T03:54:29.735898039Z" level=info msg="RemoveContainer for \"91f2f2e081f32eebe622abe42af33627f2a1fc2e2238fcb84c819f76acce878e\" returns successfully" Dec 13 03:54:29.736744 kubelet[1973]: I1213 03:54:29.736517 1973 scope.go:117] "RemoveContainer" containerID="62aa0cf4c5a37bac5551fd2486cd1c4aa1a87a807be557d6459ba2af05f09940" Dec 13 03:54:29.740144 env[1144]: 
time="2024-12-13T03:54:29.738959543Z" level=info msg="RemoveContainer for \"62aa0cf4c5a37bac5551fd2486cd1c4aa1a87a807be557d6459ba2af05f09940\"" Dec 13 03:54:29.742314 env[1144]: time="2024-12-13T03:54:29.742283551Z" level=info msg="RemoveContainer for \"62aa0cf4c5a37bac5551fd2486cd1c4aa1a87a807be557d6459ba2af05f09940\" returns successfully" Dec 13 03:54:29.742621 kubelet[1973]: I1213 03:54:29.742604 1973 scope.go:117] "RemoveContainer" containerID="cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620" Dec 13 03:54:29.743905 env[1144]: time="2024-12-13T03:54:29.743878278Z" level=info msg="RemoveContainer for \"cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620\"" Dec 13 03:54:29.754261 env[1144]: time="2024-12-13T03:54:29.754185422Z" level=info msg="RemoveContainer for \"cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620\" returns successfully" Dec 13 03:54:29.755497 kubelet[1973]: I1213 03:54:29.755476 1973 scope.go:117] "RemoveContainer" containerID="4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891" Dec 13 03:54:29.755989 env[1144]: time="2024-12-13T03:54:29.755908210Z" level=error msg="ContainerStatus for \"4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891\": not found" Dec 13 03:54:29.762350 kubelet[1973]: E1213 03:54:29.762312 1973 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891\": not found" containerID="4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891" Dec 13 03:54:29.762575 kubelet[1973]: I1213 03:54:29.762491 1973 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891"} err="failed to get container status \"4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891\": rpc error: code = NotFound desc = an error occurred when try to find container \"4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891\": not found" Dec 13 03:54:29.762659 kubelet[1973]: I1213 03:54:29.762645 1973 scope.go:117] "RemoveContainer" containerID="4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b" Dec 13 03:54:29.763042 env[1144]: time="2024-12-13T03:54:29.762977769Z" level=error msg="ContainerStatus for \"4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b\": not found" Dec 13 03:54:29.763325 kubelet[1973]: E1213 03:54:29.763300 1973 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b\": not found" containerID="4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b" Dec 13 03:54:29.763461 kubelet[1973]: I1213 03:54:29.763438 1973 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b"} err="failed to get container status \"4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b\": rpc error: code = NotFound desc = an error occurred when try to find container \"4caf3e6f2cb668ba34993afbe58f408840f3cb82ac109fb316bd8e49d06d332b\": not found" Dec 13 03:54:29.763536 kubelet[1973]: I1213 03:54:29.763525 1973 scope.go:117] "RemoveContainer" containerID="91f2f2e081f32eebe622abe42af33627f2a1fc2e2238fcb84c819f76acce878e" Dec 13 03:54:29.763881 env[1144]: 
time="2024-12-13T03:54:29.763807408Z" level=error msg="ContainerStatus for \"91f2f2e081f32eebe622abe42af33627f2a1fc2e2238fcb84c819f76acce878e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"91f2f2e081f32eebe622abe42af33627f2a1fc2e2238fcb84c819f76acce878e\": not found" Dec 13 03:54:29.764015 kubelet[1973]: E1213 03:54:29.763997 1973 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"91f2f2e081f32eebe622abe42af33627f2a1fc2e2238fcb84c819f76acce878e\": not found" containerID="91f2f2e081f32eebe622abe42af33627f2a1fc2e2238fcb84c819f76acce878e" Dec 13 03:54:29.764126 kubelet[1973]: I1213 03:54:29.764080 1973 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"91f2f2e081f32eebe622abe42af33627f2a1fc2e2238fcb84c819f76acce878e"} err="failed to get container status \"91f2f2e081f32eebe622abe42af33627f2a1fc2e2238fcb84c819f76acce878e\": rpc error: code = NotFound desc = an error occurred when try to find container \"91f2f2e081f32eebe622abe42af33627f2a1fc2e2238fcb84c819f76acce878e\": not found" Dec 13 03:54:29.764208 kubelet[1973]: I1213 03:54:29.764193 1973 scope.go:117] "RemoveContainer" containerID="62aa0cf4c5a37bac5551fd2486cd1c4aa1a87a807be557d6459ba2af05f09940" Dec 13 03:54:29.764498 env[1144]: time="2024-12-13T03:54:29.764447963Z" level=error msg="ContainerStatus for \"62aa0cf4c5a37bac5551fd2486cd1c4aa1a87a807be557d6459ba2af05f09940\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62aa0cf4c5a37bac5551fd2486cd1c4aa1a87a807be557d6459ba2af05f09940\": not found" Dec 13 03:54:29.764702 kubelet[1973]: E1213 03:54:29.764685 1973 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"62aa0cf4c5a37bac5551fd2486cd1c4aa1a87a807be557d6459ba2af05f09940\": not found" containerID="62aa0cf4c5a37bac5551fd2486cd1c4aa1a87a807be557d6459ba2af05f09940" Dec 13 03:54:29.764788 kubelet[1973]: I1213 03:54:29.764770 1973 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"62aa0cf4c5a37bac5551fd2486cd1c4aa1a87a807be557d6459ba2af05f09940"} err="failed to get container status \"62aa0cf4c5a37bac5551fd2486cd1c4aa1a87a807be557d6459ba2af05f09940\": rpc error: code = NotFound desc = an error occurred when try to find container \"62aa0cf4c5a37bac5551fd2486cd1c4aa1a87a807be557d6459ba2af05f09940\": not found" Dec 13 03:54:29.764854 kubelet[1973]: I1213 03:54:29.764843 1973 scope.go:117] "RemoveContainer" containerID="cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620" Dec 13 03:54:29.765137 env[1144]: time="2024-12-13T03:54:29.765077867Z" level=error msg="ContainerStatus for \"cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620\": not found" Dec 13 03:54:29.765311 kubelet[1973]: E1213 03:54:29.765294 1973 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620\": not found" containerID="cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620" Dec 13 03:54:29.765394 kubelet[1973]: I1213 03:54:29.765374 1973 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620"} err="failed to get container status \"cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"cfbfb075499cb0f548168f0ed8063691825ffe2c9c3680d16396454629e79620\": not found" Dec 13 03:54:29.765456 kubelet[1973]: I1213 03:54:29.765445 1973 scope.go:117] "RemoveContainer" containerID="870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591" Dec 13 03:54:29.766737 env[1144]: time="2024-12-13T03:54:29.766712589Z" level=info msg="RemoveContainer for \"870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591\"" Dec 13 03:54:29.770418 env[1144]: time="2024-12-13T03:54:29.770373431Z" level=info msg="RemoveContainer for \"870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591\" returns successfully" Dec 13 03:54:29.770720 kubelet[1973]: I1213 03:54:29.770703 1973 scope.go:117] "RemoveContainer" containerID="870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591" Dec 13 03:54:29.771160 env[1144]: time="2024-12-13T03:54:29.771073156Z" level=error msg="ContainerStatus for \"870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591\": not found" Dec 13 03:54:29.771319 kubelet[1973]: E1213 03:54:29.771299 1973 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591\": not found" containerID="870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591" Dec 13 03:54:29.771414 kubelet[1973]: I1213 03:54:29.771391 1973 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591"} err="failed to get container status \"870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"870b7210f224d02d898e39a36d128b554bdb376c71fdaa0702b5c86e7a3d5591\": not found" Dec 13 03:54:30.021945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4328b2baac1d80bcf3353f01c5a5101325c0c14ab44ca435ae8722a47d0e9891-rootfs.mount: Deactivated successfully. Dec 13 03:54:30.024428 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b821f703eb56acbede8588a79f52aa2f1dd91fe6f3120921e99b0286d7ef3bcb-rootfs.mount: Deactivated successfully. Dec 13 03:54:30.024616 systemd[1]: var-lib-kubelet-pods-d50d7f2e\x2d8b98\x2d4bb0\x2db59f\x2d15d8a8087d39-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drmb55.mount: Deactivated successfully. Dec 13 03:54:30.024791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0-rootfs.mount: Deactivated successfully. Dec 13 03:54:30.024947 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0-shm.mount: Deactivated successfully. Dec 13 03:54:30.025214 systemd[1]: var-lib-kubelet-pods-3d4a4f25\x2da9d3\x2d47b4\x2d8463\x2dbe8ab58137b7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkdfhv.mount: Deactivated successfully. Dec 13 03:54:30.025377 systemd[1]: var-lib-kubelet-pods-3d4a4f25\x2da9d3\x2d47b4\x2d8463\x2dbe8ab58137b7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 03:54:30.025538 systemd[1]: var-lib-kubelet-pods-3d4a4f25\x2da9d3\x2d47b4\x2d8463\x2dbe8ab58137b7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 03:54:31.041781 kubelet[1973]: I1213 03:54:31.041724 1973 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d4a4f25-a9d3-47b4-8463-be8ab58137b7" path="/var/lib/kubelet/pods/3d4a4f25-a9d3-47b4-8463-be8ab58137b7/volumes" Dec 13 03:54:31.043926 kubelet[1973]: I1213 03:54:31.043884 1973 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d50d7f2e-8b98-4bb0-b59f-15d8a8087d39" path="/var/lib/kubelet/pods/d50d7f2e-8b98-4bb0-b59f-15d8a8087d39/volumes" Dec 13 03:54:31.062413 sshd[3501]: pam_unix(sshd:session): session closed for user core Dec 13 03:54:31.068679 systemd[1]: Started sshd@23-172.24.4.174:22-172.24.4.1:45728.service. Dec 13 03:54:31.071884 systemd[1]: sshd@22-172.24.4.174:22-172.24.4.1:45720.service: Deactivated successfully. Dec 13 03:54:31.075987 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 03:54:31.076362 systemd[1]: session-23.scope: Consumed 1.381s CPU time. Dec 13 03:54:31.078755 systemd-logind[1131]: Session 23 logged out. Waiting for processes to exit. Dec 13 03:54:31.081755 systemd-logind[1131]: Removed session 23. Dec 13 03:54:32.216458 kubelet[1973]: E1213 03:54:32.216309 1973 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 03:54:32.336361 sshd[3663]: Accepted publickey for core from 172.24.4.1 port 45728 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 03:54:32.342849 sshd[3663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:54:32.357995 systemd[1]: Started session-24.scope. Dec 13 03:54:32.359482 systemd-logind[1131]: New session 24 of user core. Dec 13 03:54:34.410398 sshd[3663]: pam_unix(sshd:session): session closed for user core Dec 13 03:54:34.417136 systemd[1]: sshd@23-172.24.4.174:22-172.24.4.1:45728.service: Deactivated successfully. 
Dec 13 03:54:34.418799 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 03:54:34.421851 systemd-logind[1131]: Session 24 logged out. Waiting for processes to exit. Dec 13 03:54:34.427062 systemd[1]: Started sshd@24-172.24.4.174:22-172.24.4.1:45742.service. Dec 13 03:54:34.433273 systemd-logind[1131]: Removed session 24. Dec 13 03:54:34.637350 kubelet[1973]: I1213 03:54:34.634726 1973 topology_manager.go:215] "Topology Admit Handler" podUID="fc54faff-4d7a-4ca8-806b-604f76a0caf3" podNamespace="kube-system" podName="cilium-9582x" Dec 13 03:54:34.642301 kubelet[1973]: E1213 03:54:34.642245 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d4a4f25-a9d3-47b4-8463-be8ab58137b7" containerName="apply-sysctl-overwrites" Dec 13 03:54:34.646087 kubelet[1973]: E1213 03:54:34.642543 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d4a4f25-a9d3-47b4-8463-be8ab58137b7" containerName="mount-bpf-fs" Dec 13 03:54:34.646087 kubelet[1973]: E1213 03:54:34.642575 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d4a4f25-a9d3-47b4-8463-be8ab58137b7" containerName="mount-cgroup" Dec 13 03:54:34.646087 kubelet[1973]: E1213 03:54:34.642605 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d4a4f25-a9d3-47b4-8463-be8ab58137b7" containerName="clean-cilium-state" Dec 13 03:54:34.646087 kubelet[1973]: E1213 03:54:34.642625 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d50d7f2e-8b98-4bb0-b59f-15d8a8087d39" containerName="cilium-operator" Dec 13 03:54:34.646087 kubelet[1973]: E1213 03:54:34.642642 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d4a4f25-a9d3-47b4-8463-be8ab58137b7" containerName="cilium-agent" Dec 13 03:54:34.646087 kubelet[1973]: I1213 03:54:34.642737 1973 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d4a4f25-a9d3-47b4-8463-be8ab58137b7" containerName="cilium-agent" Dec 13 03:54:34.646087 kubelet[1973]: 
I1213 03:54:34.642759 1973 memory_manager.go:354] "RemoveStaleState removing state" podUID="d50d7f2e-8b98-4bb0-b59f-15d8a8087d39" containerName="cilium-operator" Dec 13 03:54:34.712685 systemd[1]: Created slice kubepods-burstable-podfc54faff_4d7a_4ca8_806b_604f76a0caf3.slice. Dec 13 03:54:34.811360 kubelet[1973]: I1213 03:54:34.811319 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-bpf-maps\") pod \"cilium-9582x\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " pod="kube-system/cilium-9582x" Dec 13 03:54:34.811503 kubelet[1973]: I1213 03:54:34.811359 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-etc-cni-netd\") pod \"cilium-9582x\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " pod="kube-system/cilium-9582x" Dec 13 03:54:34.811503 kubelet[1973]: I1213 03:54:34.811429 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc54faff-4d7a-4ca8-806b-604f76a0caf3-clustermesh-secrets\") pod \"cilium-9582x\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " pod="kube-system/cilium-9582x" Dec 13 03:54:34.811503 kubelet[1973]: I1213 03:54:34.811483 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc54faff-4d7a-4ca8-806b-604f76a0caf3-hubble-tls\") pod \"cilium-9582x\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " pod="kube-system/cilium-9582x" Dec 13 03:54:34.811598 kubelet[1973]: I1213 03:54:34.811506 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-hostproc\") pod \"cilium-9582x\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " pod="kube-system/cilium-9582x" Dec 13 03:54:34.811598 kubelet[1973]: I1213 03:54:34.811525 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-cilium-cgroup\") pod \"cilium-9582x\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " pod="kube-system/cilium-9582x" Dec 13 03:54:34.811598 kubelet[1973]: I1213 03:54:34.811578 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc54faff-4d7a-4ca8-806b-604f76a0caf3-cilium-config-path\") pod \"cilium-9582x\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " pod="kube-system/cilium-9582x" Dec 13 03:54:34.811678 kubelet[1973]: I1213 03:54:34.811602 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6842d\" (UniqueName: \"kubernetes.io/projected/fc54faff-4d7a-4ca8-806b-604f76a0caf3-kube-api-access-6842d\") pod \"cilium-9582x\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " pod="kube-system/cilium-9582x" Dec 13 03:54:34.811678 kubelet[1973]: I1213 03:54:34.811659 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-lib-modules\") pod \"cilium-9582x\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " pod="kube-system/cilium-9582x" Dec 13 03:54:34.811751 kubelet[1973]: I1213 03:54:34.811681 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-cilium-run\") pod \"cilium-9582x\" (UID: 
\"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " pod="kube-system/cilium-9582x" Dec 13 03:54:34.811751 kubelet[1973]: I1213 03:54:34.811699 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-host-proc-sys-net\") pod \"cilium-9582x\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " pod="kube-system/cilium-9582x" Dec 13 03:54:34.811810 kubelet[1973]: I1213 03:54:34.811762 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fc54faff-4d7a-4ca8-806b-604f76a0caf3-cilium-ipsec-secrets\") pod \"cilium-9582x\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " pod="kube-system/cilium-9582x" Dec 13 03:54:34.811839 kubelet[1973]: I1213 03:54:34.811782 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-host-proc-sys-kernel\") pod \"cilium-9582x\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " pod="kube-system/cilium-9582x" Dec 13 03:54:34.811870 kubelet[1973]: I1213 03:54:34.811848 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-cni-path\") pod \"cilium-9582x\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " pod="kube-system/cilium-9582x" Dec 13 03:54:34.811870 kubelet[1973]: I1213 03:54:34.811865 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-xtables-lock\") pod \"cilium-9582x\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " pod="kube-system/cilium-9582x" Dec 13 03:54:35.318932 env[1144]: 
time="2024-12-13T03:54:35.318841009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9582x,Uid:fc54faff-4d7a-4ca8-806b-604f76a0caf3,Namespace:kube-system,Attempt:0,}" Dec 13 03:54:35.354714 env[1144]: time="2024-12-13T03:54:35.353184022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:54:35.354714 env[1144]: time="2024-12-13T03:54:35.353278098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:54:35.354714 env[1144]: time="2024-12-13T03:54:35.353310619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:54:35.354714 env[1144]: time="2024-12-13T03:54:35.353620461Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358 pid=3690 runtime=io.containerd.runc.v2 Dec 13 03:54:35.384940 systemd[1]: Started cri-containerd-77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358.scope. 
Dec 13 03:54:35.428240 env[1144]: time="2024-12-13T03:54:35.428162303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9582x,Uid:fc54faff-4d7a-4ca8-806b-604f76a0caf3,Namespace:kube-system,Attempt:0,} returns sandbox id \"77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358\"" Dec 13 03:54:35.431740 env[1144]: time="2024-12-13T03:54:35.431704791Z" level=info msg="CreateContainer within sandbox \"77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 03:54:35.450552 env[1144]: time="2024-12-13T03:54:35.450509973Z" level=info msg="CreateContainer within sandbox \"77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3473507742c5184fca9e52bc802fa4b2afca5f9ac65c52b9fe3b71ec6b25a105\"" Dec 13 03:54:35.452221 env[1144]: time="2024-12-13T03:54:35.452190992Z" level=info msg="StartContainer for \"3473507742c5184fca9e52bc802fa4b2afca5f9ac65c52b9fe3b71ec6b25a105\"" Dec 13 03:54:35.469899 systemd[1]: Started cri-containerd-3473507742c5184fca9e52bc802fa4b2afca5f9ac65c52b9fe3b71ec6b25a105.scope. Dec 13 03:54:35.481200 systemd[1]: cri-containerd-3473507742c5184fca9e52bc802fa4b2afca5f9ac65c52b9fe3b71ec6b25a105.scope: Deactivated successfully. 
Dec 13 03:54:35.503752 env[1144]: time="2024-12-13T03:54:35.503647898Z" level=info msg="shim disconnected" id=3473507742c5184fca9e52bc802fa4b2afca5f9ac65c52b9fe3b71ec6b25a105 Dec 13 03:54:35.503752 env[1144]: time="2024-12-13T03:54:35.503704354Z" level=warning msg="cleaning up after shim disconnected" id=3473507742c5184fca9e52bc802fa4b2afca5f9ac65c52b9fe3b71ec6b25a105 namespace=k8s.io Dec 13 03:54:35.503752 env[1144]: time="2024-12-13T03:54:35.503717329Z" level=info msg="cleaning up dead shim" Dec 13 03:54:35.517023 env[1144]: time="2024-12-13T03:54:35.515831606Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:54:35Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3473507742c5184fca9e52bc802fa4b2afca5f9ac65c52b9fe3b71ec6b25a105/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 03:54:35.517023 env[1144]: time="2024-12-13T03:54:35.516083209Z" level=error msg="copy shim log" error="read /proc/self/fd/45: file already closed" Dec 13 03:54:35.517468 env[1144]: time="2024-12-13T03:54:35.517389884Z" level=error msg="Failed to pipe stdout of container \"3473507742c5184fca9e52bc802fa4b2afca5f9ac65c52b9fe3b71ec6b25a105\"" error="reading from a closed fifo" Dec 13 03:54:35.517572 env[1144]: time="2024-12-13T03:54:35.517467190Z" level=error msg="Failed to pipe stderr of container \"3473507742c5184fca9e52bc802fa4b2afca5f9ac65c52b9fe3b71ec6b25a105\"" error="reading from a closed fifo" Dec 13 03:54:35.526037 sshd[3676]: Accepted publickey for core from 172.24.4.1 port 45742 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 03:54:35.528441 env[1144]: time="2024-12-13T03:54:35.528383606Z" level=error msg="StartContainer for \"3473507742c5184fca9e52bc802fa4b2afca5f9ac65c52b9fe3b71ec6b25a105\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error 
during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 03:54:35.529587 sshd[3676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:54:35.533317 kubelet[1973]: E1213 03:54:35.533171 1973 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3473507742c5184fca9e52bc802fa4b2afca5f9ac65c52b9fe3b71ec6b25a105" Dec 13 03:54:35.538462 systemd[1]: Started session-25.scope. Dec 13 03:54:35.539220 systemd-logind[1131]: New session 25 of user core. Dec 13 03:54:35.540439 kubelet[1973]: E1213 03:54:35.540080 1973 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 03:54:35.540439 kubelet[1973]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 03:54:35.540439 kubelet[1973]: rm /hostbin/cilium-mount Dec 13 03:54:35.540622 kubelet[1973]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6842d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-9582x_kube-system(fc54faff-4d7a-4ca8-806b-604f76a0caf3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 03:54:35.540622 kubelet[1973]: E1213 03:54:35.540294 1973 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-9582x" podUID="fc54faff-4d7a-4ca8-806b-604f76a0caf3" Dec 13 03:54:35.717652 env[1144]: time="2024-12-13T03:54:35.717562593Z" level=info msg="CreateContainer within sandbox \"77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Dec 13 03:54:35.743520 env[1144]: time="2024-12-13T03:54:35.743379212Z" level=info msg="CreateContainer within sandbox \"77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"9aff6cd557b4248ad73b93df1fd7ebc652d735f54ec7d16eefc1fe709b9f2706\"" Dec 13 03:54:35.745346 env[1144]: time="2024-12-13T03:54:35.745272481Z" level=info msg="StartContainer for \"9aff6cd557b4248ad73b93df1fd7ebc652d735f54ec7d16eefc1fe709b9f2706\"" Dec 13 03:54:35.784752 systemd[1]: Started cri-containerd-9aff6cd557b4248ad73b93df1fd7ebc652d735f54ec7d16eefc1fe709b9f2706.scope. Dec 13 03:54:35.801265 systemd[1]: cri-containerd-9aff6cd557b4248ad73b93df1fd7ebc652d735f54ec7d16eefc1fe709b9f2706.scope: Deactivated successfully. 
Dec 13 03:54:35.813481 env[1144]: time="2024-12-13T03:54:35.813424363Z" level=info msg="shim disconnected" id=9aff6cd557b4248ad73b93df1fd7ebc652d735f54ec7d16eefc1fe709b9f2706 Dec 13 03:54:35.813720 env[1144]: time="2024-12-13T03:54:35.813696574Z" level=warning msg="cleaning up after shim disconnected" id=9aff6cd557b4248ad73b93df1fd7ebc652d735f54ec7d16eefc1fe709b9f2706 namespace=k8s.io Dec 13 03:54:35.813826 env[1144]: time="2024-12-13T03:54:35.813808636Z" level=info msg="cleaning up dead shim" Dec 13 03:54:35.826272 env[1144]: time="2024-12-13T03:54:35.826204632Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:54:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3788 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T03:54:35Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9aff6cd557b4248ad73b93df1fd7ebc652d735f54ec7d16eefc1fe709b9f2706/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 03:54:35.826774 env[1144]: time="2024-12-13T03:54:35.826719109Z" level=error msg="copy shim log" error="read /proc/self/fd/45: file already closed" Dec 13 03:54:35.828344 env[1144]: time="2024-12-13T03:54:35.827589034Z" level=error msg="Failed to pipe stdout of container \"9aff6cd557b4248ad73b93df1fd7ebc652d735f54ec7d16eefc1fe709b9f2706\"" error="reading from a closed fifo" Dec 13 03:54:35.828423 env[1144]: time="2024-12-13T03:54:35.828213637Z" level=error msg="Failed to pipe stderr of container \"9aff6cd557b4248ad73b93df1fd7ebc652d735f54ec7d16eefc1fe709b9f2706\"" error="reading from a closed fifo" Dec 13 03:54:35.832182 env[1144]: time="2024-12-13T03:54:35.832141710Z" level=error msg="StartContainer for \"9aff6cd557b4248ad73b93df1fd7ebc652d735f54ec7d16eefc1fe709b9f2706\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 03:54:35.832542 kubelet[1973]: E1213 03:54:35.832481 1973 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9aff6cd557b4248ad73b93df1fd7ebc652d735f54ec7d16eefc1fe709b9f2706" Dec 13 03:54:35.832948 kubelet[1973]: E1213 03:54:35.832643 1973 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 03:54:35.832948 kubelet[1973]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 03:54:35.832948 kubelet[1973]: rm /hostbin/cilium-mount Dec 13 03:54:35.832948 kubelet[1973]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6842d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-9582x_kube-system(fc54faff-4d7a-4ca8-806b-604f76a0caf3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 03:54:35.832948 kubelet[1973]: E1213 03:54:35.832683 1973 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-9582x" podUID="fc54faff-4d7a-4ca8-806b-604f76a0caf3" Dec 13 03:54:36.426467 sshd[3676]: pam_unix(sshd:session): session closed for user core Dec 13 03:54:36.433607 systemd[1]: Started sshd@25-172.24.4.174:22-172.24.4.1:50334.service. Dec 13 03:54:36.434778 systemd[1]: sshd@24-172.24.4.174:22-172.24.4.1:45742.service: Deactivated successfully. Dec 13 03:54:36.438496 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 03:54:36.440696 systemd-logind[1131]: Session 25 logged out. Waiting for processes to exit. Dec 13 03:54:36.443887 systemd-logind[1131]: Removed session 25. Dec 13 03:54:36.720750 kubelet[1973]: I1213 03:54:36.720498 1973 scope.go:117] "RemoveContainer" containerID="3473507742c5184fca9e52bc802fa4b2afca5f9ac65c52b9fe3b71ec6b25a105" Dec 13 03:54:36.722572 env[1144]: time="2024-12-13T03:54:36.722516209Z" level=info msg="StopPodSandbox for \"77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358\"" Dec 13 03:54:36.723938 env[1144]: time="2024-12-13T03:54:36.723857771Z" level=info msg="Container to stop \"9aff6cd557b4248ad73b93df1fd7ebc652d735f54ec7d16eefc1fe709b9f2706\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 03:54:36.724621 env[1144]: time="2024-12-13T03:54:36.724551394Z" level=info msg="Container to stop \"3473507742c5184fca9e52bc802fa4b2afca5f9ac65c52b9fe3b71ec6b25a105\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 03:54:36.729209 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358-shm.mount: Deactivated successfully. 
Dec 13 03:54:36.732057 env[1144]: time="2024-12-13T03:54:36.724406753Z" level=info msg="RemoveContainer for \"3473507742c5184fca9e52bc802fa4b2afca5f9ac65c52b9fe3b71ec6b25a105\"" Dec 13 03:54:36.742881 env[1144]: time="2024-12-13T03:54:36.742734316Z" level=info msg="RemoveContainer for \"3473507742c5184fca9e52bc802fa4b2afca5f9ac65c52b9fe3b71ec6b25a105\" returns successfully" Dec 13 03:54:36.751454 systemd[1]: cri-containerd-77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358.scope: Deactivated successfully. Dec 13 03:54:36.810095 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358-rootfs.mount: Deactivated successfully. Dec 13 03:54:36.817579 env[1144]: time="2024-12-13T03:54:36.817513530Z" level=info msg="shim disconnected" id=77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358 Dec 13 03:54:36.817908 env[1144]: time="2024-12-13T03:54:36.817852658Z" level=warning msg="cleaning up after shim disconnected" id=77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358 namespace=k8s.io Dec 13 03:54:36.818029 env[1144]: time="2024-12-13T03:54:36.818007208Z" level=info msg="cleaning up dead shim" Dec 13 03:54:36.831415 env[1144]: time="2024-12-13T03:54:36.831356725Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:54:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3830 runtime=io.containerd.runc.v2\n" Dec 13 03:54:36.832151 env[1144]: time="2024-12-13T03:54:36.832051802Z" level=info msg="TearDown network for sandbox \"77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358\" successfully" Dec 13 03:54:36.832484 env[1144]: time="2024-12-13T03:54:36.832300249Z" level=info msg="StopPodSandbox for \"77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358\" returns successfully" Dec 13 03:54:36.933045 kubelet[1973]: I1213 03:54:36.932974 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-cilium-cgroup\") pod \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " Dec 13 03:54:36.933045 kubelet[1973]: I1213 03:54:36.933052 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-host-proc-sys-kernel\") pod \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " Dec 13 03:54:36.934083 kubelet[1973]: I1213 03:54:36.933130 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-cilium-run\") pod \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " Dec 13 03:54:36.934083 kubelet[1973]: I1213 03:54:36.933187 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc54faff-4d7a-4ca8-806b-604f76a0caf3-cilium-config-path\") pod \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " Dec 13 03:54:36.934083 kubelet[1973]: I1213 03:54:36.933233 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fc54faff-4d7a-4ca8-806b-604f76a0caf3-cilium-ipsec-secrets\") pod \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " Dec 13 03:54:36.934083 kubelet[1973]: I1213 03:54:36.933273 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-etc-cni-netd\") pod \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " Dec 13 
03:54:36.934083 kubelet[1973]: I1213 03:54:36.933383 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc54faff-4d7a-4ca8-806b-604f76a0caf3-hubble-tls\") pod \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " Dec 13 03:54:36.934083 kubelet[1973]: I1213 03:54:36.933427 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6842d\" (UniqueName: \"kubernetes.io/projected/fc54faff-4d7a-4ca8-806b-604f76a0caf3-kube-api-access-6842d\") pod \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " Dec 13 03:54:36.934083 kubelet[1973]: I1213 03:54:36.933466 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-hostproc\") pod \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " Dec 13 03:54:36.934083 kubelet[1973]: I1213 03:54:36.933505 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-xtables-lock\") pod \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " Dec 13 03:54:36.934083 kubelet[1973]: I1213 03:54:36.933544 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-lib-modules\") pod \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " Dec 13 03:54:36.934083 kubelet[1973]: I1213 03:54:36.933579 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-host-proc-sys-net\") pod 
\"fc54faff-4d7a-4ca8-806b-604f76a0caf3\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " Dec 13 03:54:36.934083 kubelet[1973]: I1213 03:54:36.933614 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-cni-path\") pod \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " Dec 13 03:54:36.934083 kubelet[1973]: I1213 03:54:36.933649 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-bpf-maps\") pod \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " Dec 13 03:54:36.934083 kubelet[1973]: I1213 03:54:36.933690 1973 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc54faff-4d7a-4ca8-806b-604f76a0caf3-clustermesh-secrets\") pod \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\" (UID: \"fc54faff-4d7a-4ca8-806b-604f76a0caf3\") " Dec 13 03:54:36.937193 kubelet[1973]: I1213 03:54:36.935823 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fc54faff-4d7a-4ca8-806b-604f76a0caf3" (UID: "fc54faff-4d7a-4ca8-806b-604f76a0caf3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:36.937193 kubelet[1973]: I1213 03:54:36.936716 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-hostproc" (OuterVolumeSpecName: "hostproc") pod "fc54faff-4d7a-4ca8-806b-604f76a0caf3" (UID: "fc54faff-4d7a-4ca8-806b-604f76a0caf3"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:36.937193 kubelet[1973]: I1213 03:54:36.936803 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fc54faff-4d7a-4ca8-806b-604f76a0caf3" (UID: "fc54faff-4d7a-4ca8-806b-604f76a0caf3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:36.937193 kubelet[1973]: I1213 03:54:36.936861 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fc54faff-4d7a-4ca8-806b-604f76a0caf3" (UID: "fc54faff-4d7a-4ca8-806b-604f76a0caf3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:36.937193 kubelet[1973]: I1213 03:54:36.936914 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fc54faff-4d7a-4ca8-806b-604f76a0caf3" (UID: "fc54faff-4d7a-4ca8-806b-604f76a0caf3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:36.937193 kubelet[1973]: I1213 03:54:36.936971 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-cni-path" (OuterVolumeSpecName: "cni-path") pod "fc54faff-4d7a-4ca8-806b-604f76a0caf3" (UID: "fc54faff-4d7a-4ca8-806b-604f76a0caf3"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:36.937193 kubelet[1973]: I1213 03:54:36.937026 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fc54faff-4d7a-4ca8-806b-604f76a0caf3" (UID: "fc54faff-4d7a-4ca8-806b-604f76a0caf3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:36.942260 systemd[1]: var-lib-kubelet-pods-fc54faff\x2d4d7a\x2d4ca8\x2d806b\x2d604f76a0caf3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 03:54:36.945585 kubelet[1973]: I1213 03:54:36.945291 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fc54faff-4d7a-4ca8-806b-604f76a0caf3" (UID: "fc54faff-4d7a-4ca8-806b-604f76a0caf3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:36.945585 kubelet[1973]: I1213 03:54:36.945371 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fc54faff-4d7a-4ca8-806b-604f76a0caf3" (UID: "fc54faff-4d7a-4ca8-806b-604f76a0caf3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:36.945585 kubelet[1973]: I1213 03:54:36.945420 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fc54faff-4d7a-4ca8-806b-604f76a0caf3" (UID: "fc54faff-4d7a-4ca8-806b-604f76a0caf3"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:54:36.946811 kubelet[1973]: I1213 03:54:36.946765 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc54faff-4d7a-4ca8-806b-604f76a0caf3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fc54faff-4d7a-4ca8-806b-604f76a0caf3" (UID: "fc54faff-4d7a-4ca8-806b-604f76a0caf3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 03:54:36.953571 systemd[1]: var-lib-kubelet-pods-fc54faff\x2d4d7a\x2d4ca8\x2d806b\x2d604f76a0caf3-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 03:54:36.956806 kubelet[1973]: I1213 03:54:36.956745 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc54faff-4d7a-4ca8-806b-604f76a0caf3-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "fc54faff-4d7a-4ca8-806b-604f76a0caf3" (UID: "fc54faff-4d7a-4ca8-806b-604f76a0caf3"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 03:54:36.963603 systemd[1]: var-lib-kubelet-pods-fc54faff\x2d4d7a\x2d4ca8\x2d806b\x2d604f76a0caf3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 03:54:36.967529 kubelet[1973]: I1213 03:54:36.967469 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc54faff-4d7a-4ca8-806b-604f76a0caf3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fc54faff-4d7a-4ca8-806b-604f76a0caf3" (UID: "fc54faff-4d7a-4ca8-806b-604f76a0caf3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 03:54:36.969427 kubelet[1973]: I1213 03:54:36.969361 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc54faff-4d7a-4ca8-806b-604f76a0caf3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fc54faff-4d7a-4ca8-806b-604f76a0caf3" (UID: "fc54faff-4d7a-4ca8-806b-604f76a0caf3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 03:54:36.974495 systemd[1]: var-lib-kubelet-pods-fc54faff\x2d4d7a\x2d4ca8\x2d806b\x2d604f76a0caf3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6842d.mount: Deactivated successfully.
Dec 13 03:54:36.976392 kubelet[1973]: I1213 03:54:36.976326 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc54faff-4d7a-4ca8-806b-604f76a0caf3-kube-api-access-6842d" (OuterVolumeSpecName: "kube-api-access-6842d") pod "fc54faff-4d7a-4ca8-806b-604f76a0caf3" (UID: "fc54faff-4d7a-4ca8-806b-604f76a0caf3"). InnerVolumeSpecName "kube-api-access-6842d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 03:54:37.034527 kubelet[1973]: I1213 03:54:37.034453 1973 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc54faff-4d7a-4ca8-806b-604f76a0caf3-hubble-tls\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\""
Dec 13 03:54:37.035004 kubelet[1973]: I1213 03:54:37.034965 1973 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-6842d\" (UniqueName: \"kubernetes.io/projected/fc54faff-4d7a-4ca8-806b-604f76a0caf3-kube-api-access-6842d\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\""
Dec 13 03:54:37.036077 kubelet[1973]: I1213 03:54:37.036031 1973 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-hostproc\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\""
Dec 13 03:54:37.036390 kubelet[1973]: I1213 03:54:37.036354 1973 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-xtables-lock\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\""
Dec 13 03:54:37.036614 kubelet[1973]: I1213 03:54:37.036579 1973 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-bpf-maps\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\""
Dec 13 03:54:37.036838 kubelet[1973]: I1213 03:54:37.036795 1973 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc54faff-4d7a-4ca8-806b-604f76a0caf3-clustermesh-secrets\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\""
Dec 13 03:54:37.037166 kubelet[1973]: I1213 03:54:37.037097 1973 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-lib-modules\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\""
Dec 13 03:54:37.037380 kubelet[1973]: I1213 03:54:37.037349 1973 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-host-proc-sys-net\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\""
Dec 13 03:54:37.037590 kubelet[1973]: I1213 03:54:37.037554 1973 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-cni-path\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\""
Dec 13 03:54:37.037849 kubelet[1973]: I1213 03:54:37.037810 1973 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-cilium-cgroup\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\""
Dec 13 03:54:37.038025 kubelet[1973]: I1213 03:54:37.037996 1973 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-host-proc-sys-kernel\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\""
Dec 13 03:54:37.038287 kubelet[1973]: I1213 03:54:37.038248 1973 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-cilium-run\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\""
Dec 13 03:54:37.038875 kubelet[1973]: I1213 03:54:37.038726 1973 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc54faff-4d7a-4ca8-806b-604f76a0caf3-etc-cni-netd\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\""
Dec 13 03:54:37.039354 kubelet[1973]: I1213 03:54:37.039127 1973 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc54faff-4d7a-4ca8-806b-604f76a0caf3-cilium-config-path\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\""
Dec 13 03:54:37.039354 kubelet[1973]: I1213 03:54:37.039201 1973 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fc54faff-4d7a-4ca8-806b-604f76a0caf3-cilium-ipsec-secrets\") on node \"ci-3510-3-6-5-5611054123.novalocal\" DevicePath \"\""
Dec 13 03:54:37.049050 systemd[1]: Removed slice kubepods-burstable-podfc54faff_4d7a_4ca8_806b_604f76a0caf3.slice.
Dec 13 03:54:37.218012 kubelet[1973]: E1213 03:54:37.217898 1973 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 03:54:37.698905 sshd[3809]: Accepted publickey for core from 172.24.4.1 port 50334 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:54:37.702278 sshd[3809]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:54:37.713796 systemd-logind[1131]: New session 26 of user core.
Dec 13 03:54:37.716184 systemd[1]: Started session-26.scope.
Dec 13 03:54:37.729957 kubelet[1973]: I1213 03:54:37.729877 1973 scope.go:117] "RemoveContainer" containerID="9aff6cd557b4248ad73b93df1fd7ebc652d735f54ec7d16eefc1fe709b9f2706"
Dec 13 03:54:37.738960 env[1144]: time="2024-12-13T03:54:37.738898797Z" level=info msg="RemoveContainer for \"9aff6cd557b4248ad73b93df1fd7ebc652d735f54ec7d16eefc1fe709b9f2706\""
Dec 13 03:54:38.156871 env[1144]: time="2024-12-13T03:54:38.156801776Z" level=info msg="RemoveContainer for \"9aff6cd557b4248ad73b93df1fd7ebc652d735f54ec7d16eefc1fe709b9f2706\" returns successfully"
Dec 13 03:54:38.839323 kubelet[1973]: W1213 03:54:38.664013 1973 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc54faff_4d7a_4ca8_806b_604f76a0caf3.slice/cri-containerd-3473507742c5184fca9e52bc802fa4b2afca5f9ac65c52b9fe3b71ec6b25a105.scope WatchSource:0}: container "3473507742c5184fca9e52bc802fa4b2afca5f9ac65c52b9fe3b71ec6b25a105" in namespace "k8s.io": not found
Dec 13 03:54:39.077014 kubelet[1973]: I1213 03:54:39.076965 1973 topology_manager.go:215] "Topology Admit Handler" podUID="79f32437-377d-4486-83ec-18e83118c455" podNamespace="kube-system" podName="cilium-gtkzn"
Dec 13 03:54:39.077251 kubelet[1973]: E1213 03:54:39.077237 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fc54faff-4d7a-4ca8-806b-604f76a0caf3" containerName="mount-cgroup"
Dec 13 03:54:39.077448 kubelet[1973]: E1213 03:54:39.077434 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fc54faff-4d7a-4ca8-806b-604f76a0caf3" containerName="mount-cgroup"
Dec 13 03:54:39.077571 kubelet[1973]: I1213 03:54:39.077558 1973 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc54faff-4d7a-4ca8-806b-604f76a0caf3" containerName="mount-cgroup"
Dec 13 03:54:39.077665 kubelet[1973]: I1213 03:54:39.077654 1973 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc54faff-4d7a-4ca8-806b-604f76a0caf3" containerName="mount-cgroup"
Dec 13 03:54:39.103565 systemd[1]: Created slice kubepods-burstable-pod79f32437_377d_4486_83ec_18e83118c455.slice.
Dec 13 03:54:39.154910 kubelet[1973]: I1213 03:54:39.154865 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79f32437-377d-4486-83ec-18e83118c455-host-proc-sys-net\") pod \"cilium-gtkzn\" (UID: \"79f32437-377d-4486-83ec-18e83118c455\") " pod="kube-system/cilium-gtkzn"
Dec 13 03:54:39.155065 kubelet[1973]: I1213 03:54:39.154935 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79f32437-377d-4486-83ec-18e83118c455-bpf-maps\") pod \"cilium-gtkzn\" (UID: \"79f32437-377d-4486-83ec-18e83118c455\") " pod="kube-system/cilium-gtkzn"
Dec 13 03:54:39.155065 kubelet[1973]: I1213 03:54:39.154963 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79f32437-377d-4486-83ec-18e83118c455-etc-cni-netd\") pod \"cilium-gtkzn\" (UID: \"79f32437-377d-4486-83ec-18e83118c455\") " pod="kube-system/cilium-gtkzn"
Dec 13 03:54:39.155065 kubelet[1973]: I1213 03:54:39.154986 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/79f32437-377d-4486-83ec-18e83118c455-cilium-ipsec-secrets\") pod \"cilium-gtkzn\" (UID: \"79f32437-377d-4486-83ec-18e83118c455\") " pod="kube-system/cilium-gtkzn"
Dec 13 03:54:39.155065 kubelet[1973]: I1213 03:54:39.155026 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79f32437-377d-4486-83ec-18e83118c455-cilium-cgroup\") pod \"cilium-gtkzn\" (UID: \"79f32437-377d-4486-83ec-18e83118c455\") " pod="kube-system/cilium-gtkzn"
Dec 13 03:54:39.155065 kubelet[1973]: I1213 03:54:39.155047 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79f32437-377d-4486-83ec-18e83118c455-cni-path\") pod \"cilium-gtkzn\" (UID: \"79f32437-377d-4486-83ec-18e83118c455\") " pod="kube-system/cilium-gtkzn"
Dec 13 03:54:39.155065 kubelet[1973]: I1213 03:54:39.155065 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79f32437-377d-4486-83ec-18e83118c455-cilium-config-path\") pod \"cilium-gtkzn\" (UID: \"79f32437-377d-4486-83ec-18e83118c455\") " pod="kube-system/cilium-gtkzn"
Dec 13 03:54:39.155336 kubelet[1973]: I1213 03:54:39.155110 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79f32437-377d-4486-83ec-18e83118c455-host-proc-sys-kernel\") pod \"cilium-gtkzn\" (UID: \"79f32437-377d-4486-83ec-18e83118c455\") " pod="kube-system/cilium-gtkzn"
Dec 13 03:54:39.155336 kubelet[1973]: I1213 03:54:39.155131 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79f32437-377d-4486-83ec-18e83118c455-hubble-tls\") pod \"cilium-gtkzn\" (UID: \"79f32437-377d-4486-83ec-18e83118c455\") " pod="kube-system/cilium-gtkzn"
Dec 13 03:54:39.155336 kubelet[1973]: I1213 03:54:39.155150 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79f32437-377d-4486-83ec-18e83118c455-lib-modules\") pod \"cilium-gtkzn\" (UID: \"79f32437-377d-4486-83ec-18e83118c455\") " pod="kube-system/cilium-gtkzn"
Dec 13 03:54:39.155336 kubelet[1973]: I1213 03:54:39.155187 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79f32437-377d-4486-83ec-18e83118c455-cilium-run\") pod \"cilium-gtkzn\" (UID: \"79f32437-377d-4486-83ec-18e83118c455\") " pod="kube-system/cilium-gtkzn"
Dec 13 03:54:39.155336 kubelet[1973]: I1213 03:54:39.155206 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79f32437-377d-4486-83ec-18e83118c455-xtables-lock\") pod \"cilium-gtkzn\" (UID: \"79f32437-377d-4486-83ec-18e83118c455\") " pod="kube-system/cilium-gtkzn"
Dec 13 03:54:39.155336 kubelet[1973]: I1213 03:54:39.155225 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79f32437-377d-4486-83ec-18e83118c455-clustermesh-secrets\") pod \"cilium-gtkzn\" (UID: \"79f32437-377d-4486-83ec-18e83118c455\") " pod="kube-system/cilium-gtkzn"
Dec 13 03:54:39.155336 kubelet[1973]: I1213 03:54:39.155260 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh5cf\" (UniqueName: \"kubernetes.io/projected/79f32437-377d-4486-83ec-18e83118c455-kube-api-access-gh5cf\") pod \"cilium-gtkzn\" (UID: \"79f32437-377d-4486-83ec-18e83118c455\") " pod="kube-system/cilium-gtkzn"
Dec 13 03:54:39.155336 kubelet[1973]: I1213 03:54:39.155280 1973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79f32437-377d-4486-83ec-18e83118c455-hostproc\") pod \"cilium-gtkzn\" (UID: \"79f32437-377d-4486-83ec-18e83118c455\") " pod="kube-system/cilium-gtkzn"
Dec 13 03:54:39.408923 env[1144]: time="2024-12-13T03:54:39.408842525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gtkzn,Uid:79f32437-377d-4486-83ec-18e83118c455,Namespace:kube-system,Attempt:0,}"
Dec 13 03:54:39.444200 env[1144]: time="2024-12-13T03:54:39.444051590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:54:39.444744 env[1144]: time="2024-12-13T03:54:39.444672967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:54:39.445065 env[1144]: time="2024-12-13T03:54:39.444989371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:54:39.446466 env[1144]: time="2024-12-13T03:54:39.446384513Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1fccc317c3bce328d5afca37585b060ef98028d9fbdc8ea752520c0ec9dba54a pid=3864 runtime=io.containerd.runc.v2
Dec 13 03:54:39.481349 systemd[1]: Started cri-containerd-1fccc317c3bce328d5afca37585b060ef98028d9fbdc8ea752520c0ec9dba54a.scope.
Dec 13 03:54:39.535395 env[1144]: time="2024-12-13T03:54:39.535327123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gtkzn,Uid:79f32437-377d-4486-83ec-18e83118c455,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fccc317c3bce328d5afca37585b060ef98028d9fbdc8ea752520c0ec9dba54a\""
Dec 13 03:54:39.538994 env[1144]: time="2024-12-13T03:54:39.538960691Z" level=info msg="CreateContainer within sandbox \"1fccc317c3bce328d5afca37585b060ef98028d9fbdc8ea752520c0ec9dba54a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 03:54:39.576620 env[1144]: time="2024-12-13T03:54:39.576549629Z" level=info msg="CreateContainer within sandbox \"1fccc317c3bce328d5afca37585b060ef98028d9fbdc8ea752520c0ec9dba54a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5490a47cd579769debf89ed34c143224d99d4a760fd83e74dfe4a34125cd7d49\""
Dec 13 03:54:39.578682 env[1144]: time="2024-12-13T03:54:39.578645657Z" level=info msg="StartContainer for \"5490a47cd579769debf89ed34c143224d99d4a760fd83e74dfe4a34125cd7d49\""
Dec 13 03:54:39.602401 systemd[1]: Started cri-containerd-5490a47cd579769debf89ed34c143224d99d4a760fd83e74dfe4a34125cd7d49.scope.
Dec 13 03:54:39.770078 env[1144]: time="2024-12-13T03:54:39.768185398Z" level=info msg="StartContainer for \"5490a47cd579769debf89ed34c143224d99d4a760fd83e74dfe4a34125cd7d49\" returns successfully"
Dec 13 03:54:39.933403 systemd[1]: cri-containerd-5490a47cd579769debf89ed34c143224d99d4a760fd83e74dfe4a34125cd7d49.scope: Deactivated successfully.
Dec 13 03:54:40.001519 env[1144]: time="2024-12-13T03:54:40.001456218Z" level=info msg="shim disconnected" id=5490a47cd579769debf89ed34c143224d99d4a760fd83e74dfe4a34125cd7d49
Dec 13 03:54:40.002003 env[1144]: time="2024-12-13T03:54:40.001948343Z" level=warning msg="cleaning up after shim disconnected" id=5490a47cd579769debf89ed34c143224d99d4a760fd83e74dfe4a34125cd7d49 namespace=k8s.io
Dec 13 03:54:40.002070 env[1144]: time="2024-12-13T03:54:40.002002745Z" level=info msg="cleaning up dead shim"
Dec 13 03:54:40.011516 env[1144]: time="2024-12-13T03:54:40.011474955Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:54:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3948 runtime=io.containerd.runc.v2\n"
Dec 13 03:54:40.785532 env[1144]: time="2024-12-13T03:54:40.785405784Z" level=info msg="CreateContainer within sandbox \"1fccc317c3bce328d5afca37585b060ef98028d9fbdc8ea752520c0ec9dba54a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 03:54:40.819678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount364437421.mount: Deactivated successfully.
Dec 13 03:54:40.839328 env[1144]: time="2024-12-13T03:54:40.839266536Z" level=info msg="CreateContainer within sandbox \"1fccc317c3bce328d5afca37585b060ef98028d9fbdc8ea752520c0ec9dba54a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6e8a4bf9612f18e744c51cb36ef0adc8885dd42e79afe86c5003d418766431fb\""
Dec 13 03:54:40.840278 env[1144]: time="2024-12-13T03:54:40.840195542Z" level=info msg="StartContainer for \"6e8a4bf9612f18e744c51cb36ef0adc8885dd42e79afe86c5003d418766431fb\""
Dec 13 03:54:40.883567 systemd[1]: Started cri-containerd-6e8a4bf9612f18e744c51cb36ef0adc8885dd42e79afe86c5003d418766431fb.scope.
Dec 13 03:54:40.921134 kubelet[1973]: I1213 03:54:40.920800 1973 setters.go:580] "Node became not ready" node="ci-3510-3-6-5-5611054123.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T03:54:40Z","lastTransitionTime":"2024-12-13T03:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 03:54:40.923322 env[1144]: time="2024-12-13T03:54:40.923219937Z" level=info msg="StartContainer for \"6e8a4bf9612f18e744c51cb36ef0adc8885dd42e79afe86c5003d418766431fb\" returns successfully"
Dec 13 03:54:40.949794 systemd[1]: cri-containerd-6e8a4bf9612f18e744c51cb36ef0adc8885dd42e79afe86c5003d418766431fb.scope: Deactivated successfully.
Dec 13 03:54:40.987419 env[1144]: time="2024-12-13T03:54:40.987350588Z" level=info msg="shim disconnected" id=6e8a4bf9612f18e744c51cb36ef0adc8885dd42e79afe86c5003d418766431fb
Dec 13 03:54:40.987419 env[1144]: time="2024-12-13T03:54:40.987422052Z" level=warning msg="cleaning up after shim disconnected" id=6e8a4bf9612f18e744c51cb36ef0adc8885dd42e79afe86c5003d418766431fb namespace=k8s.io
Dec 13 03:54:40.987764 env[1144]: time="2024-12-13T03:54:40.987434686Z" level=info msg="cleaning up dead shim"
Dec 13 03:54:40.997863 env[1144]: time="2024-12-13T03:54:40.997814110Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:54:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4009 runtime=io.containerd.runc.v2\n"
Dec 13 03:54:41.039189 kubelet[1973]: I1213 03:54:41.038123 1973 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc54faff-4d7a-4ca8-806b-604f76a0caf3" path="/var/lib/kubelet/pods/fc54faff-4d7a-4ca8-806b-604f76a0caf3/volumes"
Dec 13 03:54:41.267848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e8a4bf9612f18e744c51cb36ef0adc8885dd42e79afe86c5003d418766431fb-rootfs.mount: Deactivated successfully.
Dec 13 03:54:41.792412 env[1144]: time="2024-12-13T03:54:41.792243054Z" level=info msg="CreateContainer within sandbox \"1fccc317c3bce328d5afca37585b060ef98028d9fbdc8ea752520c0ec9dba54a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 03:54:42.028138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1292382066.mount: Deactivated successfully.
Dec 13 03:54:42.036587 env[1144]: time="2024-12-13T03:54:42.036503333Z" level=info msg="CreateContainer within sandbox \"1fccc317c3bce328d5afca37585b060ef98028d9fbdc8ea752520c0ec9dba54a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6d0b196aa4f3ce7e9d9c59689e8b793169bc6163621cf57a8f6378b2b152bcad\""
Dec 13 03:54:42.039680 env[1144]: time="2024-12-13T03:54:42.038003191Z" level=info msg="StartContainer for \"6d0b196aa4f3ce7e9d9c59689e8b793169bc6163621cf57a8f6378b2b152bcad\""
Dec 13 03:54:42.089493 systemd[1]: Started cri-containerd-6d0b196aa4f3ce7e9d9c59689e8b793169bc6163621cf57a8f6378b2b152bcad.scope.
Dec 13 03:54:42.143824 env[1144]: time="2024-12-13T03:54:42.143736450Z" level=info msg="StartContainer for \"6d0b196aa4f3ce7e9d9c59689e8b793169bc6163621cf57a8f6378b2b152bcad\" returns successfully"
Dec 13 03:54:42.155517 systemd[1]: cri-containerd-6d0b196aa4f3ce7e9d9c59689e8b793169bc6163621cf57a8f6378b2b152bcad.scope: Deactivated successfully.
Dec 13 03:54:42.209236 env[1144]: time="2024-12-13T03:54:42.209092869Z" level=info msg="shim disconnected" id=6d0b196aa4f3ce7e9d9c59689e8b793169bc6163621cf57a8f6378b2b152bcad
Dec 13 03:54:42.209236 env[1144]: time="2024-12-13T03:54:42.209219467Z" level=warning msg="cleaning up after shim disconnected" id=6d0b196aa4f3ce7e9d9c59689e8b793169bc6163621cf57a8f6378b2b152bcad namespace=k8s.io
Dec 13 03:54:42.209644 env[1144]: time="2024-12-13T03:54:42.209243583Z" level=info msg="cleaning up dead shim"
Dec 13 03:54:42.219087 kubelet[1973]: E1213 03:54:42.219017 1973 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 03:54:42.222175 env[1144]: time="2024-12-13T03:54:42.222125248Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:54:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4068 runtime=io.containerd.runc.v2\n"
Dec 13 03:54:42.265354 systemd[1]: run-containerd-runc-k8s.io-6d0b196aa4f3ce7e9d9c59689e8b793169bc6163621cf57a8f6378b2b152bcad-runc.j8IgOi.mount: Deactivated successfully.
Dec 13 03:54:42.265560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d0b196aa4f3ce7e9d9c59689e8b793169bc6163621cf57a8f6378b2b152bcad-rootfs.mount: Deactivated successfully.
Dec 13 03:54:42.800875 env[1144]: time="2024-12-13T03:54:42.799168367Z" level=info msg="CreateContainer within sandbox \"1fccc317c3bce328d5afca37585b060ef98028d9fbdc8ea752520c0ec9dba54a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 03:54:42.834826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount616192159.mount: Deactivated successfully.
Dec 13 03:54:42.839031 env[1144]: time="2024-12-13T03:54:42.838945531Z" level=info msg="CreateContainer within sandbox \"1fccc317c3bce328d5afca37585b060ef98028d9fbdc8ea752520c0ec9dba54a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dbed4c2a713335f945e8ebe82c6e46512a1be49224ef50b97218adf5225b5578\""
Dec 13 03:54:42.841991 env[1144]: time="2024-12-13T03:54:42.841926954Z" level=info msg="StartContainer for \"dbed4c2a713335f945e8ebe82c6e46512a1be49224ef50b97218adf5225b5578\""
Dec 13 03:54:42.878510 systemd[1]: Started cri-containerd-dbed4c2a713335f945e8ebe82c6e46512a1be49224ef50b97218adf5225b5578.scope.
Dec 13 03:54:42.916926 systemd[1]: cri-containerd-dbed4c2a713335f945e8ebe82c6e46512a1be49224ef50b97218adf5225b5578.scope: Deactivated successfully.
Dec 13 03:54:42.918918 env[1144]: time="2024-12-13T03:54:42.918853864Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79f32437_377d_4486_83ec_18e83118c455.slice/cri-containerd-dbed4c2a713335f945e8ebe82c6e46512a1be49224ef50b97218adf5225b5578.scope/memory.events\": no such file or directory"
Dec 13 03:54:42.922620 env[1144]: time="2024-12-13T03:54:42.922580857Z" level=info msg="StartContainer for \"dbed4c2a713335f945e8ebe82c6e46512a1be49224ef50b97218adf5225b5578\" returns successfully"
Dec 13 03:54:42.949926 env[1144]: time="2024-12-13T03:54:42.949867430Z" level=info msg="shim disconnected" id=dbed4c2a713335f945e8ebe82c6e46512a1be49224ef50b97218adf5225b5578
Dec 13 03:54:42.949926 env[1144]: time="2024-12-13T03:54:42.949919288Z" level=warning msg="cleaning up after shim disconnected" id=dbed4c2a713335f945e8ebe82c6e46512a1be49224ef50b97218adf5225b5578 namespace=k8s.io
Dec 13 03:54:42.950167 env[1144]: time="2024-12-13T03:54:42.949931931Z" level=info msg="cleaning up dead shim"
Dec 13 03:54:42.960507 env[1144]: time="2024-12-13T03:54:42.960451338Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:54:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4125 runtime=io.containerd.runc.v2\n"
Dec 13 03:54:43.266802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbed4c2a713335f945e8ebe82c6e46512a1be49224ef50b97218adf5225b5578-rootfs.mount: Deactivated successfully.
Dec 13 03:54:43.806602 env[1144]: time="2024-12-13T03:54:43.806495743Z" level=info msg="CreateContainer within sandbox \"1fccc317c3bce328d5afca37585b060ef98028d9fbdc8ea752520c0ec9dba54a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 03:54:44.683978 env[1144]: time="2024-12-13T03:54:44.683866663Z" level=info msg="CreateContainer within sandbox \"1fccc317c3bce328d5afca37585b060ef98028d9fbdc8ea752520c0ec9dba54a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a5e53eff43e3582c3398a71e4a68e8a33b058852cc0486e391165e7d5510f91f\""
Dec 13 03:54:44.685780 env[1144]: time="2024-12-13T03:54:44.685707783Z" level=info msg="StartContainer for \"a5e53eff43e3582c3398a71e4a68e8a33b058852cc0486e391165e7d5510f91f\""
Dec 13 03:54:44.761179 systemd[1]: run-containerd-runc-k8s.io-a5e53eff43e3582c3398a71e4a68e8a33b058852cc0486e391165e7d5510f91f-runc.cWaeTL.mount: Deactivated successfully.
Dec 13 03:54:44.768508 systemd[1]: Started cri-containerd-a5e53eff43e3582c3398a71e4a68e8a33b058852cc0486e391165e7d5510f91f.scope.
Dec 13 03:54:44.995321 env[1144]: time="2024-12-13T03:54:44.994925337Z" level=info msg="StartContainer for \"a5e53eff43e3582c3398a71e4a68e8a33b058852cc0486e391165e7d5510f91f\" returns successfully"
Dec 13 03:54:46.798187 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 03:54:46.867160 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Dec 13 03:54:47.031867 env[1144]: time="2024-12-13T03:54:47.031780411Z" level=info msg="StopPodSandbox for \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\""
Dec 13 03:54:47.032795 env[1144]: time="2024-12-13T03:54:47.032672867Z" level=info msg="TearDown network for sandbox \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\" successfully"
Dec 13 03:54:47.032795 env[1144]: time="2024-12-13T03:54:47.032722911Z" level=info msg="StopPodSandbox for \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\" returns successfully"
Dec 13 03:54:47.035945 env[1144]: time="2024-12-13T03:54:47.034390846Z" level=info msg="RemovePodSandbox for \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\""
Dec 13 03:54:47.035945 env[1144]: time="2024-12-13T03:54:47.034436201Z" level=info msg="Forcibly stopping sandbox \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\""
Dec 13 03:54:47.035945 env[1144]: time="2024-12-13T03:54:47.034539325Z" level=info msg="TearDown network for sandbox \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\" successfully"
Dec 13 03:54:47.042851 env[1144]: time="2024-12-13T03:54:47.042730244Z" level=info msg="RemovePodSandbox \"68dd1c897fe1c6b44e88e884776098b7ee37b4c9a716c3cf401bfa2321a108f0\" returns successfully"
Dec 13 03:54:47.046402 env[1144]: time="2024-12-13T03:54:47.045724339Z" level=info msg="StopPodSandbox for \"b821f703eb56acbede8588a79f52aa2f1dd91fe6f3120921e99b0286d7ef3bcb\""
Dec 13 03:54:47.046402 env[1144]: time="2024-12-13T03:54:47.045865475Z" level=info msg="TearDown network for sandbox \"b821f703eb56acbede8588a79f52aa2f1dd91fe6f3120921e99b0286d7ef3bcb\" successfully"
Dec 13 03:54:47.048180 env[1144]: time="2024-12-13T03:54:47.048084213Z" level=info msg="StopPodSandbox for \"b821f703eb56acbede8588a79f52aa2f1dd91fe6f3120921e99b0286d7ef3bcb\" returns successfully"
Dec 13 03:54:47.051066 env[1144]: time="2024-12-13T03:54:47.050846754Z" level=info msg="RemovePodSandbox for \"b821f703eb56acbede8588a79f52aa2f1dd91fe6f3120921e99b0286d7ef3bcb\""
Dec 13 03:54:47.051706 env[1144]: time="2024-12-13T03:54:47.051631027Z" level=info msg="Forcibly stopping sandbox \"b821f703eb56acbede8588a79f52aa2f1dd91fe6f3120921e99b0286d7ef3bcb\""
Dec 13 03:54:47.052169 env[1144]: time="2024-12-13T03:54:47.052144162Z" level=info msg="TearDown network for sandbox \"b821f703eb56acbede8588a79f52aa2f1dd91fe6f3120921e99b0286d7ef3bcb\" successfully"
Dec 13 03:54:47.066027 env[1144]: time="2024-12-13T03:54:47.065915786Z" level=info msg="RemovePodSandbox \"b821f703eb56acbede8588a79f52aa2f1dd91fe6f3120921e99b0286d7ef3bcb\" returns successfully"
Dec 13 03:54:47.068086 env[1144]: time="2024-12-13T03:54:47.068040268Z" level=info msg="StopPodSandbox for \"77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358\""
Dec 13 03:54:47.068291 env[1144]: time="2024-12-13T03:54:47.068188006Z" level=info msg="TearDown network for sandbox \"77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358\" successfully"
Dec 13 03:54:47.068291 env[1144]: time="2024-12-13T03:54:47.068235405Z" level=info msg="StopPodSandbox for \"77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358\" returns successfully"
Dec 13 03:54:47.068846 env[1144]: time="2024-12-13T03:54:47.068644473Z" level=info msg="RemovePodSandbox for \"77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358\""
Dec 13 03:54:47.068846 env[1144]: time="2024-12-13T03:54:47.068673949Z" level=info msg="Forcibly stopping sandbox \"77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358\""
Dec 13 03:54:47.068846 env[1144]: time="2024-12-13T03:54:47.068740745Z" level=info msg="TearDown network for sandbox \"77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358\" successfully"
Dec 13 03:54:47.072501 env[1144]: time="2024-12-13T03:54:47.072446667Z" level=info msg="RemovePodSandbox \"77cd04e37da65d63da702fc91d70b7b07d6cdaeacc03c1096080fceb30861358\" returns successfully"
Dec 13 03:54:47.891853 systemd[1]: run-containerd-runc-k8s.io-a5e53eff43e3582c3398a71e4a68e8a33b058852cc0486e391165e7d5510f91f-runc.8irJtF.mount: Deactivated successfully.
Dec 13 03:54:50.046271 systemd-networkd[976]: lxc_health: Link UP
Dec 13 03:54:50.206133 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 03:54:50.206150 systemd-networkd[976]: lxc_health: Gained carrier
Dec 13 03:54:50.674782 systemd[1]: run-containerd-runc-k8s.io-a5e53eff43e3582c3398a71e4a68e8a33b058852cc0486e391165e7d5510f91f-runc.rno7wN.mount: Deactivated successfully.
Dec 13 03:54:51.451574 kubelet[1973]: I1213 03:54:51.451511 1973 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gtkzn" podStartSLOduration=13.450336109 podStartE2EDuration="13.450336109s" podCreationTimestamp="2024-12-13 03:54:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:54:46.843347476 +0000 UTC m=+180.019515697" watchObservedRunningTime="2024-12-13 03:54:51.450336109 +0000 UTC m=+184.626504341"
Dec 13 03:54:52.164503 systemd-networkd[976]: lxc_health: Gained IPv6LL
Dec 13 03:54:52.883920 systemd[1]: run-containerd-runc-k8s.io-a5e53eff43e3582c3398a71e4a68e8a33b058852cc0486e391165e7d5510f91f-runc.LnR7Gm.mount: Deactivated successfully.
Dec 13 03:54:55.070289 systemd[1]: run-containerd-runc-k8s.io-a5e53eff43e3582c3398a71e4a68e8a33b058852cc0486e391165e7d5510f91f-runc.5VU12T.mount: Deactivated successfully.
Dec 13 03:54:57.302415 systemd[1]: run-containerd-runc-k8s.io-a5e53eff43e3582c3398a71e4a68e8a33b058852cc0486e391165e7d5510f91f-runc.yV3z0N.mount: Deactivated successfully.
Dec 13 03:54:57.581676 sshd[3809]: pam_unix(sshd:session): session closed for user core
Dec 13 03:54:57.587097 systemd[1]: sshd@25-172.24.4.174:22-172.24.4.1:50334.service: Deactivated successfully.
Dec 13 03:54:57.588719 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 03:54:57.590183 systemd-logind[1131]: Session 26 logged out. Waiting for processes to exit.
Dec 13 03:54:57.592974 systemd-logind[1131]: Removed session 26.