Dec 13 03:52:06.036321 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 03:52:06.036357 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 03:52:06.036379 kernel: BIOS-provided physical RAM map:
Dec 13 03:52:06.036392 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 03:52:06.036405 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 03:52:06.036417 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 03:52:06.036432 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 13 03:52:06.036445 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 13 03:52:06.036461 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 03:52:06.036473 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 03:52:06.036485 kernel: NX (Execute Disable) protection: active
Dec 13 03:52:06.036497 kernel: SMBIOS 2.8 present.
Dec 13 03:52:06.036509 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 13 03:52:06.036522 kernel: Hypervisor detected: KVM
Dec 13 03:52:06.036537 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 03:52:06.036553 kernel: kvm-clock: cpu 0, msr 4a19b001, primary cpu clock
Dec 13 03:52:06.036566 kernel: kvm-clock: using sched offset of 6946650870 cycles
Dec 13 03:52:06.036580 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 03:52:06.036595 kernel: tsc: Detected 1996.249 MHz processor
Dec 13 03:52:06.036609 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 03:52:06.036623 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 03:52:06.036637 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 13 03:52:06.036651 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 03:52:06.036669 kernel: ACPI: Early table checksum verification disabled
Dec 13 03:52:06.036682 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Dec 13 03:52:06.036696 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 03:52:06.036710 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 03:52:06.036724 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 03:52:06.036739 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 13 03:52:06.036760 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 03:52:06.036780 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 03:52:06.036802 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Dec 13 03:52:06.036830 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Dec 13 03:52:06.036851 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 13 03:52:06.036865 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Dec 13 03:52:06.036879 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Dec 13 03:52:06.036892 kernel: No NUMA configuration found
Dec 13 03:52:06.036906 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Dec 13 03:52:06.036920 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Dec 13 03:52:06.036934 kernel: Zone ranges:
Dec 13 03:52:06.036957 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 03:52:06.040034 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Dec 13 03:52:06.040054 kernel: Normal empty
Dec 13 03:52:06.040070 kernel: Movable zone start for each node
Dec 13 03:52:06.040081 kernel: Early memory node ranges
Dec 13 03:52:06.040090 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 03:52:06.040105 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 13 03:52:06.040114 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Dec 13 03:52:06.040124 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 03:52:06.040133 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 03:52:06.040142 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Dec 13 03:52:06.040149 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 03:52:06.040157 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 03:52:06.040165 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 03:52:06.040172 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 03:52:06.040182 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 03:52:06.040189 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 03:52:06.040197 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 03:52:06.040205 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 03:52:06.040212 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 03:52:06.040221 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 03:52:06.040228 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 13 03:52:06.040236 kernel: Booting paravirtualized kernel on KVM
Dec 13 03:52:06.040244 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 03:52:06.040252 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 03:52:06.040262 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 03:52:06.040270 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 03:52:06.040277 kernel: pcpu-alloc: [0] 0 1
Dec 13 03:52:06.040285 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Dec 13 03:52:06.040292 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 03:52:06.040300 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Dec 13 03:52:06.040307 kernel: Policy zone: DMA32
Dec 13 03:52:06.040316 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 03:52:06.040327 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 03:52:06.040334 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 03:52:06.040342 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 03:52:06.040350 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 03:52:06.040358 kernel: Memory: 1973284K/2096620K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 123076K reserved, 0K cma-reserved)
Dec 13 03:52:06.040366 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 03:52:06.040373 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 03:52:06.040381 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 03:52:06.040390 kernel: rcu: Hierarchical RCU implementation.
Dec 13 03:52:06.040398 kernel: rcu: RCU event tracing is enabled.
Dec 13 03:52:06.040406 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 03:52:06.040414 kernel: Rude variant of Tasks RCU enabled.
Dec 13 03:52:06.040422 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 03:52:06.040429 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 03:52:06.040437 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 03:52:06.040445 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 03:52:06.040453 kernel: Console: colour VGA+ 80x25
Dec 13 03:52:06.040462 kernel: printk: console [tty0] enabled
Dec 13 03:52:06.040469 kernel: printk: console [ttyS0] enabled
Dec 13 03:52:06.040477 kernel: ACPI: Core revision 20210730
Dec 13 03:52:06.040485 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 03:52:06.040492 kernel: x2apic enabled
Dec 13 03:52:06.040500 kernel: Switched APIC routing to physical x2apic.
Dec 13 03:52:06.040507 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 03:52:06.040515 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 03:52:06.040523 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Dec 13 03:52:06.040531 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 03:52:06.040540 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 03:52:06.040548 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 03:52:06.040556 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 03:52:06.040564 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 03:52:06.040571 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 03:52:06.040579 kernel: Speculative Store Bypass: Vulnerable
Dec 13 03:52:06.040587 kernel: x86/fpu: x87 FPU will use FXSAVE
Dec 13 03:52:06.040594 kernel: Freeing SMP alternatives memory: 32K
Dec 13 03:52:06.040602 kernel: pid_max: default: 32768 minimum: 301
Dec 13 03:52:06.040611 kernel: LSM: Security Framework initializing
Dec 13 03:52:06.040618 kernel: SELinux: Initializing.
Dec 13 03:52:06.040626 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 03:52:06.040634 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 03:52:06.040642 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Dec 13 03:52:06.040649 kernel: Performance Events: AMD PMU driver.
Dec 13 03:52:06.040657 kernel: ... version: 0
Dec 13 03:52:06.040664 kernel: ... bit width: 48
Dec 13 03:52:06.040672 kernel: ... generic registers: 4
Dec 13 03:52:06.040686 kernel: ... value mask: 0000ffffffffffff
Dec 13 03:52:06.040694 kernel: ... max period: 00007fffffffffff
Dec 13 03:52:06.040704 kernel: ... fixed-purpose events: 0
Dec 13 03:52:06.040712 kernel: ... event mask: 000000000000000f
Dec 13 03:52:06.040720 kernel: signal: max sigframe size: 1440
Dec 13 03:52:06.040728 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 03:52:06.040736 kernel: smp: Bringing up secondary CPUs ...
Dec 13 03:52:06.040744 kernel: x86: Booting SMP configuration:
Dec 13 03:52:06.040753 kernel: .... node #0, CPUs: #1
Dec 13 03:52:06.040762 kernel: kvm-clock: cpu 1, msr 4a19b041, secondary cpu clock
Dec 13 03:52:06.040770 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Dec 13 03:52:06.040778 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 03:52:06.040786 kernel: smpboot: Max logical packages: 2
Dec 13 03:52:06.040794 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Dec 13 03:52:06.040802 kernel: devtmpfs: initialized
Dec 13 03:52:06.040809 kernel: x86/mm: Memory block size: 128MB
Dec 13 03:52:06.040818 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 03:52:06.040829 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 03:52:06.040837 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 03:52:06.040845 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 03:52:06.040853 kernel: audit: initializing netlink subsys (disabled)
Dec 13 03:52:06.040861 kernel: audit: type=2000 audit(1734061924.744:1): state=initialized audit_enabled=0 res=1
Dec 13 03:52:06.040869 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 03:52:06.040876 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 03:52:06.040884 kernel: cpuidle: using governor menu
Dec 13 03:52:06.040892 kernel: ACPI: bus type PCI registered
Dec 13 03:52:06.040902 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 03:52:06.040910 kernel: dca service started, version 1.12.1
Dec 13 03:52:06.040918 kernel: PCI: Using configuration type 1 for base access
Dec 13 03:52:06.040926 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 03:52:06.040934 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 03:52:06.040942 kernel: ACPI: Added _OSI(Module Device)
Dec 13 03:52:06.040951 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 03:52:06.040959 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 03:52:06.040978 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 03:52:06.040989 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 03:52:06.040997 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 03:52:06.041005 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 03:52:06.041013 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 03:52:06.041021 kernel: ACPI: Interpreter enabled
Dec 13 03:52:06.041030 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 03:52:06.041038 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 03:52:06.041046 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 03:52:06.041054 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 03:52:06.041065 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 03:52:06.041201 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 03:52:06.041288 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 03:52:06.041301 kernel: acpiphp: Slot [3] registered
Dec 13 03:52:06.041309 kernel: acpiphp: Slot [4] registered
Dec 13 03:52:06.041318 kernel: acpiphp: Slot [5] registered
Dec 13 03:52:06.041325 kernel: acpiphp: Slot [6] registered
Dec 13 03:52:06.041337 kernel: acpiphp: Slot [7] registered
Dec 13 03:52:06.041345 kernel: acpiphp: Slot [8] registered
Dec 13 03:52:06.041353 kernel: acpiphp: Slot [9] registered
Dec 13 03:52:06.041360 kernel: acpiphp: Slot [10] registered
Dec 13 03:52:06.041368 kernel: acpiphp: Slot [11] registered
Dec 13 03:52:06.041376 kernel: acpiphp: Slot [12] registered
Dec 13 03:52:06.041384 kernel: acpiphp: Slot [13] registered
Dec 13 03:52:06.041392 kernel: acpiphp: Slot [14] registered
Dec 13 03:52:06.041400 kernel: acpiphp: Slot [15] registered
Dec 13 03:52:06.041409 kernel: acpiphp: Slot [16] registered
Dec 13 03:52:06.041419 kernel: acpiphp: Slot [17] registered
Dec 13 03:52:06.041427 kernel: acpiphp: Slot [18] registered
Dec 13 03:52:06.041435 kernel: acpiphp: Slot [19] registered
Dec 13 03:52:06.041443 kernel: acpiphp: Slot [20] registered
Dec 13 03:52:06.041450 kernel: acpiphp: Slot [21] registered
Dec 13 03:52:06.041459 kernel: acpiphp: Slot [22] registered
Dec 13 03:52:06.041466 kernel: acpiphp: Slot [23] registered
Dec 13 03:52:06.041474 kernel: acpiphp: Slot [24] registered
Dec 13 03:52:06.041482 kernel: acpiphp: Slot [25] registered
Dec 13 03:52:06.041492 kernel: acpiphp: Slot [26] registered
Dec 13 03:52:06.041500 kernel: acpiphp: Slot [27] registered
Dec 13 03:52:06.041508 kernel: acpiphp: Slot [28] registered
Dec 13 03:52:06.041516 kernel: acpiphp: Slot [29] registered
Dec 13 03:52:06.041523 kernel: acpiphp: Slot [30] registered
Dec 13 03:52:06.041531 kernel: acpiphp: Slot [31] registered
Dec 13 03:52:06.041539 kernel: PCI host bridge to bus 0000:00
Dec 13 03:52:06.041636 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 03:52:06.041713 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 03:52:06.041792 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 03:52:06.041873 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 03:52:06.041947 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 13 03:52:06.042040 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 03:52:06.042140 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 03:52:06.042235 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 03:52:06.042338 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 13 03:52:06.042422 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Dec 13 03:52:06.042506 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 13 03:52:06.042589 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 13 03:52:06.042673 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 13 03:52:06.042762 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 13 03:52:06.042858 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 03:52:06.042953 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 03:52:06.046101 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 03:52:06.046196 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 13 03:52:06.046279 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 13 03:52:06.046360 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 03:52:06.046442 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Dec 13 03:52:06.046526 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Dec 13 03:52:06.046606 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 03:52:06.046701 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 03:52:06.046782 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Dec 13 03:52:06.046864 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Dec 13 03:52:06.046987 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 03:52:06.047076 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Dec 13 03:52:06.047171 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 03:52:06.047267 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 03:52:06.047349 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Dec 13 03:52:06.047430 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 03:52:06.047519 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Dec 13 03:52:06.047602 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Dec 13 03:52:06.047692 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 03:52:06.047793 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 03:52:06.047876 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Dec 13 03:52:06.047959 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 03:52:06.047985 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 03:52:06.047993 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 03:52:06.048002 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 03:52:06.048010 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 03:52:06.048018 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 03:52:06.048029 kernel: iommu: Default domain type: Translated
Dec 13 03:52:06.048037 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 03:52:06.048123 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 03:52:06.048205 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 03:52:06.048288 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 03:52:06.048300 kernel: vgaarb: loaded
Dec 13 03:52:06.048308 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 03:52:06.048316 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 03:52:06.048325 kernel: PTP clock support registered
Dec 13 03:52:06.048336 kernel: PCI: Using ACPI for IRQ routing
Dec 13 03:52:06.048344 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 03:52:06.048351 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 03:52:06.048359 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 13 03:52:06.048368 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 03:52:06.048376 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 03:52:06.048384 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 03:52:06.048392 kernel: pnp: PnP ACPI init
Dec 13 03:52:06.048478 kernel: pnp 00:03: [dma 2]
Dec 13 03:52:06.048493 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 03:52:06.048502 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 03:52:06.048510 kernel: NET: Registered PF_INET protocol family
Dec 13 03:52:06.048518 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 03:52:06.048526 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 03:52:06.048534 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 03:52:06.048543 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 03:52:06.048551 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 03:52:06.048561 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 03:52:06.048569 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 03:52:06.048577 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 03:52:06.048585 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 03:52:06.048593 kernel: NET: Registered PF_XDP protocol family
Dec 13 03:52:06.048667 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 03:52:06.048744 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 03:52:06.048817 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 03:52:06.048887 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 03:52:06.048976 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 13 03:52:06.049067 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 03:52:06.049151 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 03:52:06.049234 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Dec 13 03:52:06.049245 kernel: PCI: CLS 0 bytes, default 64
Dec 13 03:52:06.049254 kernel: Initialise system trusted keyrings
Dec 13 03:52:06.049262 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 03:52:06.049273 kernel: Key type asymmetric registered
Dec 13 03:52:06.049281 kernel: Asymmetric key parser 'x509' registered
Dec 13 03:52:06.049289 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 03:52:06.049297 kernel: io scheduler mq-deadline registered
Dec 13 03:52:06.049305 kernel: io scheduler kyber registered
Dec 13 03:52:06.049313 kernel: io scheduler bfq registered
Dec 13 03:52:06.049321 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 03:52:06.049330 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 03:52:06.049338 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 03:52:06.049346 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 03:52:06.049356 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 03:52:06.049364 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 03:52:06.049372 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 03:52:06.049380 kernel: random: crng init done
Dec 13 03:52:06.049388 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 03:52:06.049396 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 03:52:06.049404 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 03:52:06.049494 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 03:52:06.049512 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 03:52:06.049587 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 03:52:06.049662 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T03:52:05 UTC (1734061925)
Dec 13 03:52:06.049735 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 13 03:52:06.049747 kernel: NET: Registered PF_INET6 protocol family
Dec 13 03:52:06.049755 kernel: Segment Routing with IPv6
Dec 13 03:52:06.049763 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 03:52:06.049771 kernel: NET: Registered PF_PACKET protocol family
Dec 13 03:52:06.049779 kernel: Key type dns_resolver registered
Dec 13 03:52:06.049790 kernel: IPI shorthand broadcast: enabled
Dec 13 03:52:06.049799 kernel: sched_clock: Marking stable (728706566, 120656809)->(873504798, -24141423)
Dec 13 03:52:06.049807 kernel: registered taskstats version 1
Dec 13 03:52:06.049815 kernel: Loading compiled-in X.509 certificates
Dec 13 03:52:06.049823 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 03:52:06.049831 kernel: Key type .fscrypt registered
Dec 13 03:52:06.049839 kernel: Key type fscrypt-provisioning registered
Dec 13 03:52:06.049847 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 03:52:06.049858 kernel: ima: Allocated hash algorithm: sha1
Dec 13 03:52:06.049866 kernel: ima: No architecture policies found
Dec 13 03:52:06.049873 kernel: clk: Disabling unused clocks
Dec 13 03:52:06.049881 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 03:52:06.049889 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 03:52:06.049897 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 03:52:06.049905 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 03:52:06.049913 kernel: Run /init as init process
Dec 13 03:52:06.049921 kernel: with arguments:
Dec 13 03:52:06.049931 kernel: /init
Dec 13 03:52:06.049939 kernel: with environment:
Dec 13 03:52:06.049946 kernel: HOME=/
Dec 13 03:52:06.049954 kernel: TERM=linux
Dec 13 03:52:06.049962 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 03:52:06.055015 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 03:52:06.055029 systemd[1]: Detected virtualization kvm.
Dec 13 03:52:06.055039 systemd[1]: Detected architecture x86-64.
Dec 13 03:52:06.055067 systemd[1]: Running in initrd.
Dec 13 03:52:06.055077 systemd[1]: No hostname configured, using default hostname.
Dec 13 03:52:06.055086 systemd[1]: Hostname set to .
Dec 13 03:52:06.055096 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 03:52:06.055105 systemd[1]: Queued start job for default target initrd.target.
Dec 13 03:52:06.055115 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 03:52:06.055124 systemd[1]: Reached target cryptsetup.target.
Dec 13 03:52:06.055133 systemd[1]: Reached target paths.target.
Dec 13 03:52:06.055144 systemd[1]: Reached target slices.target.
Dec 13 03:52:06.055154 systemd[1]: Reached target swap.target.
Dec 13 03:52:06.055165 systemd[1]: Reached target timers.target.
Dec 13 03:52:06.055174 systemd[1]: Listening on iscsid.socket.
Dec 13 03:52:06.055183 systemd[1]: Listening on iscsiuio.socket.
Dec 13 03:52:06.055192 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 03:52:06.055201 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 03:52:06.055210 systemd[1]: Listening on systemd-journald.socket.
Dec 13 03:52:06.055221 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 03:52:06.055229 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 03:52:06.055247 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 03:52:06.055256 systemd[1]: Reached target sockets.target.
Dec 13 03:52:06.055276 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 03:52:06.055287 systemd[1]: Finished network-cleanup.service.
Dec 13 03:52:06.055298 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 03:52:06.055307 systemd[1]: Starting systemd-journald.service...
Dec 13 03:52:06.055316 systemd[1]: Starting systemd-modules-load.service...
Dec 13 03:52:06.055325 systemd[1]: Starting systemd-resolved.service...
Dec 13 03:52:06.055334 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 03:52:06.055343 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 03:52:06.055352 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 03:52:06.055364 systemd-journald[185]: Journal started
Dec 13 03:52:06.055436 systemd-journald[185]: Runtime Journal (/run/log/journal/fe3dd3fe4f48463f94a8d15466136c68) is 4.9M, max 39.5M, 34.5M free.
Dec 13 03:52:06.016510 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 03:52:06.080473 systemd[1]: Started systemd-journald.service.
Dec 13 03:52:06.080557 kernel: audit: type=1130 audit(1734061926.066:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:06.080573 kernel: audit: type=1130 audit(1734061926.068:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:06.080585 kernel: audit: type=1130 audit(1734061926.068:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:06.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:06.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:06.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:06.062635 systemd-resolved[187]: Positive Trust Anchors:
Dec 13 03:52:06.084948 kernel: audit: type=1130 audit(1734061926.080:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:06.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:06.062645 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 03:52:06.062680 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 03:52:06.094608 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 03:52:06.065403 systemd-resolved[187]: Defaulting to hostname 'linux'.
Dec 13 03:52:06.106620 kernel: Bridge firewalling registered
Dec 13 03:52:06.106655 kernel: audit: type=1130 audit(1734061926.101:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:06.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:06.069328 systemd[1]: Started systemd-resolved.service.
Dec 13 03:52:06.073281 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 03:52:06.081162 systemd[1]: Reached target nss-lookup.target.
Dec 13 03:52:06.086433 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 03:52:06.087672 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 03:52:06.098374 systemd-modules-load[186]: Inserted module 'br_netfilter'
Dec 13 03:52:06.102165 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 03:52:06.121745 kernel: audit: type=1130 audit(1734061926.113:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:06.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:06.114008 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 03:52:06.115278 systemd[1]: Starting dracut-cmdline.service... Dec 13 03:52:06.125998 dracut-cmdline[202]: dracut-dracut-053 Dec 13 03:52:06.129007 kernel: SCSI subsystem initialized Dec 13 03:52:06.131103 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 03:52:06.147390 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 03:52:06.150460 kernel: device-mapper: uevent: version 1.0.3 Dec 13 03:52:06.150532 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 03:52:06.156293 systemd-modules-load[186]: Inserted module 'dm_multipath' Dec 13 03:52:06.157184 systemd[1]: Finished systemd-modules-load.service. Dec 13 03:52:06.158915 systemd[1]: Starting systemd-sysctl.service... Dec 13 03:52:06.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:52:06.164032 kernel: audit: type=1130 audit(1734061926.157:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:06.171119 systemd[1]: Finished systemd-sysctl.service. Dec 13 03:52:06.175414 kernel: audit: type=1130 audit(1734061926.170:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:06.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:06.210014 kernel: Loading iSCSI transport class v2.0-870. Dec 13 03:52:06.230025 kernel: iscsi: registered transport (tcp) Dec 13 03:52:06.256390 kernel: iscsi: registered transport (qla4xxx) Dec 13 03:52:06.256453 kernel: QLogic iSCSI HBA Driver Dec 13 03:52:06.307907 systemd[1]: Finished dracut-cmdline.service. Dec 13 03:52:06.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:06.309433 systemd[1]: Starting dracut-pre-udev.service... Dec 13 03:52:06.315208 kernel: audit: type=1130 audit(1734061926.307:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:52:06.386091 kernel: raid6: sse2x4 gen() 12625 MB/s Dec 13 03:52:06.403065 kernel: raid6: sse2x4 xor() 4942 MB/s Dec 13 03:52:06.420042 kernel: raid6: sse2x2 gen() 13551 MB/s Dec 13 03:52:06.437046 kernel: raid6: sse2x2 xor() 8715 MB/s Dec 13 03:52:06.454017 kernel: raid6: sse2x1 gen() 11068 MB/s Dec 13 03:52:06.471721 kernel: raid6: sse2x1 xor() 6955 MB/s Dec 13 03:52:06.471794 kernel: raid6: using algorithm sse2x2 gen() 13551 MB/s Dec 13 03:52:06.471822 kernel: raid6: .... xor() 8715 MB/s, rmw enabled Dec 13 03:52:06.472614 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 03:52:06.509043 kernel: xor: measuring software checksum speed Dec 13 03:52:06.509134 kernel: prefetch64-sse : 5980 MB/sec Dec 13 03:52:06.511863 kernel: generic_sse : 6687 MB/sec Dec 13 03:52:06.511889 kernel: xor: using function: generic_sse (6687 MB/sec) Dec 13 03:52:06.650067 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 03:52:06.665623 systemd[1]: Finished dracut-pre-udev.service. Dec 13 03:52:06.667213 systemd[1]: Starting systemd-udevd.service... Dec 13 03:52:06.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:06.666000 audit: BPF prog-id=7 op=LOAD Dec 13 03:52:06.666000 audit: BPF prog-id=8 op=LOAD Dec 13 03:52:06.704843 systemd-udevd[384]: Using default interface naming scheme 'v252'. Dec 13 03:52:06.716738 systemd[1]: Started systemd-udevd.service. Dec 13 03:52:06.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:06.726842 systemd[1]: Starting dracut-pre-trigger.service... 
Dec 13 03:52:06.754807 dracut-pre-trigger[396]: rd.md=0: removing MD RAID activation Dec 13 03:52:06.821259 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 03:52:06.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:06.824756 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 03:52:06.891089 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 03:52:06.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:06.964997 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Dec 13 03:52:07.029167 kernel: libata version 3.00 loaded. Dec 13 03:52:07.029188 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 13 03:52:07.029316 kernel: scsi host0: ata_piix Dec 13 03:52:07.029481 kernel: scsi host1: ata_piix Dec 13 03:52:07.029618 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Dec 13 03:52:07.029631 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Dec 13 03:52:07.029643 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 03:52:07.029658 kernel: GPT:17805311 != 41943039 Dec 13 03:52:07.029668 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 03:52:07.029679 kernel: GPT:17805311 != 41943039 Dec 13 03:52:07.029689 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 03:52:07.029699 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 03:52:07.322048 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (431) Dec 13 03:52:07.346749 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 03:52:07.517144 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. 
Dec 13 03:52:07.554841 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 03:52:07.562882 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 03:52:07.564294 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 03:52:07.569399 systemd[1]: Starting disk-uuid.service... Dec 13 03:52:07.622850 disk-uuid[460]: Primary Header is updated. Dec 13 03:52:07.622850 disk-uuid[460]: Secondary Entries is updated. Dec 13 03:52:07.622850 disk-uuid[460]: Secondary Header is updated. Dec 13 03:52:07.634420 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 03:52:07.645071 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 03:52:08.727023 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 03:52:08.727762 disk-uuid[461]: The operation has completed successfully. Dec 13 03:52:09.746528 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 03:52:09.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:09.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:09.746738 systemd[1]: Finished disk-uuid.service. Dec 13 03:52:09.749767 systemd[1]: Starting verity-setup.service... Dec 13 03:52:09.883036 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Dec 13 03:52:10.413787 systemd[1]: Found device dev-mapper-usr.device. Dec 13 03:52:10.418048 systemd[1]: Mounting sysusr-usr.mount... Dec 13 03:52:10.424161 systemd[1]: Finished verity-setup.service. 
Dec 13 03:52:10.438024 kernel: kauditd_printk_skb: 8 callbacks suppressed Dec 13 03:52:10.438077 kernel: audit: type=1130 audit(1734061930.424:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:10.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:10.564005 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 03:52:10.564529 systemd[1]: Mounted sysusr-usr.mount. Dec 13 03:52:10.565567 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 03:52:10.567636 systemd[1]: Starting ignition-setup.service... Dec 13 03:52:10.570110 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 03:52:10.644771 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 03:52:10.644842 kernel: BTRFS info (device vda6): using free space tree Dec 13 03:52:10.644856 kernel: BTRFS info (device vda6): has skinny extents Dec 13 03:52:10.701714 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 03:52:10.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:10.707025 kernel: audit: type=1130 audit(1734061930.701:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:10.706000 audit: BPF prog-id=9 op=LOAD Dec 13 03:52:10.708866 systemd[1]: Starting systemd-networkd.service... 
Dec 13 03:52:10.709474 kernel: audit: type=1334 audit(1734061930.706:21): prog-id=9 op=LOAD Dec 13 03:52:10.747661 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 03:52:10.770544 systemd[1]: Finished ignition-setup.service. Dec 13 03:52:10.783155 kernel: audit: type=1130 audit(1734061930.773:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:10.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:10.770788 systemd-networkd[624]: lo: Link UP Dec 13 03:52:10.795196 kernel: audit: type=1130 audit(1734061930.782:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:10.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:10.770793 systemd-networkd[624]: lo: Gained carrier Dec 13 03:52:10.771381 systemd-networkd[624]: Enumeration completed Dec 13 03:52:10.771627 systemd-networkd[624]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 03:52:10.772949 systemd-networkd[624]: eth0: Link UP Dec 13 03:52:10.772953 systemd-networkd[624]: eth0: Gained carrier Dec 13 03:52:10.774191 systemd[1]: Started systemd-networkd.service. Dec 13 03:52:10.783588 systemd[1]: Reached target network.target. Dec 13 03:52:10.792620 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 03:52:10.801042 systemd[1]: Starting iscsiuio.service... 
Dec 13 03:52:10.803299 systemd-networkd[624]: eth0: DHCPv4 address 172.24.4.199/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 03:52:10.813022 systemd[1]: Started iscsiuio.service. Dec 13 03:52:10.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:10.816856 systemd[1]: Starting iscsid.service... Dec 13 03:52:10.823456 kernel: audit: type=1130 audit(1734061930.813:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:10.823481 iscsid[636]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 03:52:10.823481 iscsid[636]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 03:52:10.823481 iscsid[636]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 03:52:10.823481 iscsid[636]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 03:52:10.823481 iscsid[636]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 03:52:10.823481 iscsid[636]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 03:52:10.823481 iscsid[636]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 03:52:10.834198 kernel: audit: type=1130 audit(1734061930.825:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Dec 13 03:52:10.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:10.825675 systemd[1]: Started iscsid.service. Dec 13 03:52:10.831315 systemd[1]: Starting dracut-initqueue.service... Dec 13 03:52:10.843498 systemd[1]: Finished dracut-initqueue.service. Dec 13 03:52:10.844096 systemd[1]: Reached target remote-fs-pre.target. Dec 13 03:52:10.849572 kernel: audit: type=1130 audit(1734061930.843:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:10.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:10.848548 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 03:52:10.849019 systemd[1]: Reached target remote-fs.target. Dec 13 03:52:10.850913 systemd[1]: Starting dracut-pre-mount.service... Dec 13 03:52:10.860679 systemd[1]: Finished dracut-pre-mount.service. Dec 13 03:52:10.865346 kernel: audit: type=1130 audit(1734061930.860:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:10.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:52:11.274923 ignition[632]: Ignition 2.14.0 Dec 13 03:52:11.276182 ignition[632]: Stage: fetch-offline Dec 13 03:52:11.276344 ignition[632]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:52:11.276391 ignition[632]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 03:52:11.278676 ignition[632]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 03:52:11.278893 ignition[632]: parsed url from cmdline: "" Dec 13 03:52:11.278902 ignition[632]: no config URL provided Dec 13 03:52:11.278915 ignition[632]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 03:52:11.292909 kernel: audit: type=1130 audit(1734061931.282:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:11.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:11.282075 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 03:52:11.278934 ignition[632]: no config at "/usr/lib/ignition/user.ign" Dec 13 03:52:11.285391 systemd[1]: Starting ignition-fetch.service... 
Dec 13 03:52:11.278957 ignition[632]: failed to fetch config: resource requires networking Dec 13 03:52:11.280344 ignition[632]: Ignition finished successfully Dec 13 03:52:11.304628 ignition[654]: Ignition 2.14.0 Dec 13 03:52:11.304644 ignition[654]: Stage: fetch Dec 13 03:52:11.304881 ignition[654]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:52:11.304923 ignition[654]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 03:52:11.307158 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 03:52:11.307414 ignition[654]: parsed url from cmdline: "" Dec 13 03:52:11.307423 ignition[654]: no config URL provided Dec 13 03:52:11.307437 ignition[654]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 03:52:11.307457 ignition[654]: no config at "/usr/lib/ignition/user.ign" Dec 13 03:52:11.309614 ignition[654]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 03:52:11.315368 ignition[654]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 03:52:11.315421 ignition[654]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 13 03:52:11.641692 ignition[654]: GET result: OK Dec 13 03:52:11.641827 ignition[654]: parsing config with SHA512: 44a1974e0ec7464b6e16d92e48afff03da5579179192c5438d7219ea8e10ab97cfe0d8afa8bb63c592fec603e1aa476cc4fcd42bcccd5fec599528ac3bcf1626 Dec 13 03:52:11.669135 unknown[654]: fetched base config from "system" Dec 13 03:52:11.669998 ignition[654]: fetch: fetch complete Dec 13 03:52:11.669162 unknown[654]: fetched base config from "system" Dec 13 03:52:11.670012 ignition[654]: fetch: fetch passed Dec 13 03:52:11.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:52:11.669177 unknown[654]: fetched user config from "openstack" Dec 13 03:52:11.670150 ignition[654]: Ignition finished successfully Dec 13 03:52:11.673507 systemd[1]: Finished ignition-fetch.service. Dec 13 03:52:11.677207 systemd[1]: Starting ignition-kargs.service... Dec 13 03:52:11.699788 ignition[660]: Ignition 2.14.0 Dec 13 03:52:11.699817 ignition[660]: Stage: kargs Dec 13 03:52:11.700127 ignition[660]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:52:11.700174 ignition[660]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 03:52:11.702590 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 03:52:11.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:11.707123 systemd[1]: Finished ignition-kargs.service. Dec 13 03:52:11.704837 ignition[660]: kargs: kargs passed Dec 13 03:52:11.710597 systemd[1]: Starting ignition-disks.service... Dec 13 03:52:11.704934 ignition[660]: Ignition finished successfully Dec 13 03:52:11.730235 ignition[666]: Ignition 2.14.0 Dec 13 03:52:11.730261 ignition[666]: Stage: disks Dec 13 03:52:11.730514 ignition[666]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:52:11.730557 ignition[666]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 03:52:11.732801 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 03:52:11.736331 systemd[1]: Finished ignition-disks.service. 
Dec 13 03:52:11.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:11.734824 ignition[666]: disks: disks passed Dec 13 03:52:11.739105 systemd[1]: Reached target initrd-root-device.target. Dec 13 03:52:11.734918 ignition[666]: Ignition finished successfully Dec 13 03:52:11.741174 systemd[1]: Reached target local-fs-pre.target. Dec 13 03:52:11.743372 systemd[1]: Reached target local-fs.target. Dec 13 03:52:11.745657 systemd[1]: Reached target sysinit.target. Dec 13 03:52:11.747877 systemd[1]: Reached target basic.target. Dec 13 03:52:11.752017 systemd[1]: Starting systemd-fsck-root.service... Dec 13 03:52:12.341291 systemd-fsck[673]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 03:52:12.598654 systemd-networkd[624]: eth0: Gained IPv6LL Dec 13 03:52:12.812084 systemd[1]: Finished systemd-fsck-root.service. Dec 13 03:52:12.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:12.815651 systemd[1]: Mounting sysroot.mount... Dec 13 03:52:12.913039 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 03:52:12.914766 systemd[1]: Mounted sysroot.mount. Dec 13 03:52:12.917444 systemd[1]: Reached target initrd-root-fs.target. Dec 13 03:52:12.922358 systemd[1]: Mounting sysroot-usr.mount... Dec 13 03:52:12.924309 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 03:52:12.925709 systemd[1]: Starting flatcar-openstack-hostname.service... Dec 13 03:52:12.931076 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Dec 13 03:52:12.931167 systemd[1]: Reached target ignition-diskful.target. Dec 13 03:52:12.941407 systemd[1]: Mounted sysroot-usr.mount. Dec 13 03:52:12.949905 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 03:52:12.953344 systemd[1]: Starting initrd-setup-root.service... Dec 13 03:52:12.972629 initrd-setup-root[685]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 03:52:12.991045 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (680) Dec 13 03:52:12.998042 initrd-setup-root[693]: cut: /sysroot/etc/group: No such file or directory Dec 13 03:52:13.012076 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 03:52:13.012155 kernel: BTRFS info (device vda6): using free space tree Dec 13 03:52:13.012184 kernel: BTRFS info (device vda6): has skinny extents Dec 13 03:52:13.012211 initrd-setup-root[705]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 03:52:13.021330 initrd-setup-root[725]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 03:52:13.033853 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 03:52:13.096232 systemd[1]: Finished initrd-setup-root.service. Dec 13 03:52:13.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:13.097625 systemd[1]: Starting ignition-mount.service... Dec 13 03:52:13.098655 systemd[1]: Starting sysroot-boot.service... Dec 13 03:52:13.110687 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 03:52:13.110817 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Dec 13 03:52:13.132831 ignition[748]: INFO : Ignition 2.14.0 Dec 13 03:52:13.132831 ignition[748]: INFO : Stage: mount Dec 13 03:52:13.134130 ignition[748]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:52:13.134130 ignition[748]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 03:52:13.134130 ignition[748]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 03:52:13.136938 ignition[748]: INFO : mount: mount passed Dec 13 03:52:13.136938 ignition[748]: INFO : Ignition finished successfully Dec 13 03:52:13.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:13.138204 systemd[1]: Finished ignition-mount.service. Dec 13 03:52:13.148901 systemd[1]: Finished sysroot-boot.service. Dec 13 03:52:13.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:13.151115 coreos-metadata[679]: Dec 13 03:52:13.151 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 03:52:13.165773 coreos-metadata[679]: Dec 13 03:52:13.165 INFO Fetch successful Dec 13 03:52:13.166575 coreos-metadata[679]: Dec 13 03:52:13.166 INFO wrote hostname ci-3510-3-6-5-153fa2e4c7.novalocal to /sysroot/etc/hostname Dec 13 03:52:13.169704 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 03:52:13.169810 systemd[1]: Finished flatcar-openstack-hostname.service. Dec 13 03:52:13.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 03:52:13.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:13.171913 systemd[1]: Starting ignition-files.service... Dec 13 03:52:13.179326 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 03:52:13.188002 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (756) Dec 13 03:52:13.190757 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 03:52:13.190781 kernel: BTRFS info (device vda6): using free space tree Dec 13 03:52:13.190793 kernel: BTRFS info (device vda6): has skinny extents Dec 13 03:52:13.197781 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 03:52:13.208400 ignition[775]: INFO : Ignition 2.14.0 Dec 13 03:52:13.209245 ignition[775]: INFO : Stage: files Dec 13 03:52:13.209829 ignition[775]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:52:13.210649 ignition[775]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 03:52:13.212616 ignition[775]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 03:52:13.214667 ignition[775]: DEBUG : files: compiled without relabeling support, skipping Dec 13 03:52:13.215611 ignition[775]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 03:52:13.215611 ignition[775]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 03:52:13.221390 ignition[775]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 03:52:13.222125 ignition[775]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 03:52:13.222933 unknown[775]: wrote ssh authorized keys 
file for user: core
Dec 13 03:52:13.223605 ignition[775]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 03:52:13.224417 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 03:52:13.225249 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 03:52:13.225249 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 03:52:13.225249 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 03:52:13.225249 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 03:52:13.225249 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 03:52:13.225249 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 03:52:13.234512 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Dec 13 03:52:13.648931 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Dec 13 03:52:15.333386 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 03:52:15.333386 ignition[775]: INFO : files: op(7): [started] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 03:52:15.333386 ignition[775]: INFO : files: op(7): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 03:52:15.333386 ignition[775]: INFO : files: op(8): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 03:52:15.341188 ignition[775]: INFO : files: op(8): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 03:52:15.345565 ignition[775]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 03:52:15.345565 ignition[775]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 03:52:15.345565 ignition[775]: INFO : files: files passed
Dec 13 03:52:15.345565 ignition[775]: INFO : Ignition finished successfully
Dec 13 03:52:15.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.346562 systemd[1]: Finished ignition-files.service.
Dec 13 03:52:15.350763 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 03:52:15.355385 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 03:52:15.357391 systemd[1]: Starting ignition-quench.service...
Dec 13 03:52:15.364545 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 03:52:15.365438 systemd[1]: Finished ignition-quench.service.
Dec 13 03:52:15.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.367663 initrd-setup-root-after-ignition[800]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 03:52:15.368761 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 03:52:15.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.369906 systemd[1]: Reached target ignition-complete.target.
Dec 13 03:52:15.371708 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 03:52:15.395163 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 03:52:15.395871 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 03:52:15.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.397041 systemd[1]: Reached target initrd-fs.target.
Dec 13 03:52:15.397933 systemd[1]: Reached target initrd.target.
Dec 13 03:52:15.398866 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 03:52:15.400430 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 03:52:15.421252 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 03:52:15.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.423199 systemd[1]: Starting initrd-cleanup.service...
Dec 13 03:52:15.438247 systemd[1]: Stopped target nss-lookup.target.
Dec 13 03:52:15.439300 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 03:52:15.440343 systemd[1]: Stopped target timers.target.
Dec 13 03:52:15.441324 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 03:52:15.441984 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 03:52:15.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.443167 systemd[1]: Stopped target initrd.target.
Dec 13 03:52:15.453538 kernel: kauditd_printk_skb: 16 callbacks suppressed
Dec 13 03:52:15.453560 kernel: audit: type=1131 audit(1734061935.441:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.454157 systemd[1]: Stopped target basic.target.
Dec 13 03:52:15.455176 systemd[1]: Stopped target ignition-complete.target.
Dec 13 03:52:15.456248 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 03:52:15.457308 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 03:52:15.458385 systemd[1]: Stopped target remote-fs.target.
Dec 13 03:52:15.459415 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 03:52:15.460464 systemd[1]: Stopped target sysinit.target.
Dec 13 03:52:15.461475 systemd[1]: Stopped target local-fs.target.
Dec 13 03:52:15.462502 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 03:52:15.463561 systemd[1]: Stopped target swap.target.
Dec 13 03:52:15.464510 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 03:52:15.465191 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 03:52:15.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.466312 systemd[1]: Stopped target cryptsetup.target.
Dec 13 03:52:15.474503 kernel: audit: type=1131 audit(1734061935.465:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.475762 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 03:52:15.476142 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 03:52:15.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.477919 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 03:52:15.482631 kernel: audit: type=1131 audit(1734061935.476:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.478331 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 03:52:15.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.484614 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 03:52:15.484742 systemd[1]: Stopped ignition-files.service.
Dec 13 03:52:15.494003 kernel: audit: type=1131 audit(1734061935.483:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.500026 iscsid[636]: iscsid shutting down.
Dec 13 03:52:15.497298 systemd[1]: Stopping ignition-mount.service...
Dec 13 03:52:15.506041 ignition[813]: INFO : Ignition 2.14.0
Dec 13 03:52:15.506802 ignition[813]: INFO : Stage: umount
Dec 13 03:52:15.507446 ignition[813]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 03:52:15.508161 systemd[1]: Stopping iscsid.service...
Dec 13 03:52:15.509594 ignition[813]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 03:52:15.527384 kernel: audit: type=1131 audit(1734061935.494:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.527419 kernel: audit: type=1131 audit(1734061935.517:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.527543 ignition[813]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 03:52:15.527543 ignition[813]: INFO : umount: umount passed
Dec 13 03:52:15.527543 ignition[813]: INFO : Ignition finished successfully
Dec 13 03:52:15.532949 kernel: audit: type=1131 audit(1734061935.526:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.510818 systemd[1]: Stopping sysroot-boot.service...
Dec 13 03:52:15.517769 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 03:52:15.518014 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 03:52:15.518630 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 03:52:15.518750 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 03:52:15.529366 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 03:52:15.529487 systemd[1]: Stopped iscsid.service.
Dec 13 03:52:15.540280 kernel: audit: type=1131 audit(1734061935.535:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.537200 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 03:52:15.537285 systemd[1]: Stopped ignition-mount.service.
Dec 13 03:52:15.552530 kernel: audit: type=1131 audit(1734061935.539:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.552548 kernel: audit: type=1131 audit(1734061935.544:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.541060 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 03:52:15.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.541161 systemd[1]: Stopped ignition-disks.service.
Dec 13 03:52:15.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.545420 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 03:52:15.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.545457 systemd[1]: Stopped ignition-kargs.service.
Dec 13 03:52:15.552988 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 03:52:15.553029 systemd[1]: Stopped ignition-fetch.service.
Dec 13 03:52:15.553850 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 03:52:15.553887 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 03:52:15.554712 systemd[1]: Stopped target paths.target.
Dec 13 03:52:15.555570 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 03:52:15.559006 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 03:52:15.559902 systemd[1]: Stopped target slices.target.
Dec 13 03:52:15.560793 systemd[1]: Stopped target sockets.target.
Dec 13 03:52:15.561692 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 03:52:15.561725 systemd[1]: Closed iscsid.socket.
Dec 13 03:52:15.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.562526 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 03:52:15.562570 systemd[1]: Stopped ignition-setup.service.
Dec 13 03:52:15.563485 systemd[1]: Stopping iscsiuio.service...
Dec 13 03:52:15.568204 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 03:52:15.569283 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 03:52:15.569886 systemd[1]: Stopped iscsiuio.service.
Dec 13 03:52:15.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.571052 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 03:52:15.571661 systemd[1]: Finished initrd-cleanup.service.
Dec 13 03:52:15.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.573449 systemd[1]: Stopped target network.target.
Dec 13 03:52:15.574362 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 03:52:15.574898 systemd[1]: Closed iscsiuio.socket.
Dec 13 03:52:15.575884 systemd[1]: Stopping systemd-networkd.service...
Dec 13 03:52:15.576922 systemd[1]: Stopping systemd-resolved.service...
Dec 13 03:52:15.579009 systemd-networkd[624]: eth0: DHCPv6 lease lost
Dec 13 03:52:15.579915 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 03:52:15.580233 systemd[1]: Stopped systemd-networkd.service.
Dec 13 03:52:15.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.581120 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 03:52:15.581153 systemd[1]: Closed systemd-networkd.socket.
Dec 13 03:52:15.583000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 03:52:15.583169 systemd[1]: Stopping network-cleanup.service...
Dec 13 03:52:15.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.584540 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 03:52:15.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.584587 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 03:52:15.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.586393 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 03:52:15.586432 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 03:52:15.587681 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 03:52:15.587722 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 03:52:15.592415 systemd[1]: Stopping systemd-udevd.service...
Dec 13 03:52:15.595036 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 03:52:15.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.595596 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 03:52:15.595694 systemd[1]: Stopped systemd-resolved.service.
Dec 13 03:52:15.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.601000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 03:52:15.600134 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 03:52:15.600305 systemd[1]: Stopped systemd-udevd.service.
Dec 13 03:52:15.601611 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 03:52:15.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.601671 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 03:52:15.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.602199 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 03:52:15.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.602237 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 03:52:15.603186 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 03:52:15.603237 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 03:52:15.604562 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 03:52:15.604616 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 03:52:15.605654 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 03:52:15.605693 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 03:52:15.607339 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 03:52:15.614317 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 03:52:15.615041 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 03:52:15.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.616284 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 03:52:15.616327 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 03:52:15.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.617870 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 03:52:15.617909 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 03:52:15.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.620173 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 03:52:15.620747 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 03:52:15.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.620843 systemd[1]: Stopped network-cleanup.service.
Dec 13 03:52:15.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.621654 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 03:52:15.621740 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 03:52:15.949020 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 03:52:15.949257 systemd[1]: Stopped sysroot-boot.service.
Dec 13 03:52:15.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.951960 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 03:52:15.953899 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 03:52:15.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:15.954074 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 03:52:15.957735 systemd[1]: Starting initrd-switch-root.service...
Dec 13 03:52:16.000833 systemd[1]: Switching root.
Dec 13 03:52:16.022688 systemd-journald[185]: Journal stopped
Dec 13 03:52:23.640667 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Dec 13 03:52:23.640740 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 03:52:23.640757 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 03:52:23.640770 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 03:52:23.640783 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 03:52:23.640795 kernel: SELinux: policy capability open_perms=1
Dec 13 03:52:23.640807 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 03:52:23.640828 kernel: SELinux: policy capability always_check_network=0
Dec 13 03:52:23.640840 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 03:52:23.640856 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 03:52:23.640869 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 03:52:23.640880 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 03:52:23.640893 systemd[1]: Successfully loaded SELinux policy in 252.533ms.
Dec 13 03:52:23.640914 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.425ms.
Dec 13 03:52:23.640929 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 03:52:23.640944 systemd[1]: Detected virtualization kvm.
Dec 13 03:52:23.640957 systemd[1]: Detected architecture x86-64.
Dec 13 03:52:23.640993 systemd[1]: Detected first boot.
Dec 13 03:52:23.641011 systemd[1]: Hostname set to .
Dec 13 03:52:23.641025 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 03:52:23.641037 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 03:52:23.641050 systemd[1]: Populated /etc with preset unit settings.
Dec 13 03:52:23.641063 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 03:52:23.641083 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 03:52:23.641098 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 03:52:23.641116 kernel: kauditd_printk_skb: 42 callbacks suppressed
Dec 13 03:52:23.641128 kernel: audit: type=1334 audit(1734061943.371:90): prog-id=12 op=LOAD
Dec 13 03:52:23.641140 kernel: audit: type=1334 audit(1734061943.371:91): prog-id=3 op=UNLOAD
Dec 13 03:52:23.641152 kernel: audit: type=1334 audit(1734061943.373:92): prog-id=13 op=LOAD
Dec 13 03:52:23.641164 kernel: audit: type=1334 audit(1734061943.376:93): prog-id=14 op=LOAD
Dec 13 03:52:23.641176 kernel: audit: type=1334 audit(1734061943.376:94): prog-id=4 op=UNLOAD
Dec 13 03:52:23.641189 kernel: audit: type=1334 audit(1734061943.376:95): prog-id=5 op=UNLOAD
Dec 13 03:52:23.641201 kernel: audit: type=1334 audit(1734061943.379:96): prog-id=15 op=LOAD
Dec 13 03:52:23.641215 kernel: audit: type=1334 audit(1734061943.379:97): prog-id=12 op=UNLOAD
Dec 13 03:52:23.641227 kernel: audit: type=1334 audit(1734061943.382:98): prog-id=16 op=LOAD
Dec 13 03:52:23.641238 kernel: audit: type=1334 audit(1734061943.385:99): prog-id=17 op=LOAD
Dec 13 03:52:23.641252 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 03:52:23.641264 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 03:52:23.641278 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 03:52:23.641290 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 03:52:23.641304 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 03:52:23.641325 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 03:52:23.641350 systemd[1]: Created slice system-getty.slice.
Dec 13 03:52:23.641376 systemd[1]: Created slice system-modprobe.slice.
Dec 13 03:52:23.641395 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 03:52:23.641410 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 03:52:23.641430 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 03:52:23.641445 systemd[1]: Created slice user.slice.
Dec 13 03:52:23.641457 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 03:52:23.641469 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 03:52:23.641480 systemd[1]: Set up automount boot.automount.
Dec 13 03:52:23.641492 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 03:52:23.641507 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 03:52:23.641518 systemd[1]: Stopped target initrd-fs.target.
Dec 13 03:52:23.641532 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 03:52:23.641544 systemd[1]: Reached target integritysetup.target.
Dec 13 03:52:23.641556 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 03:52:23.641568 systemd[1]: Reached target remote-fs.target.
Dec 13 03:52:23.641580 systemd[1]: Reached target slices.target.
Dec 13 03:52:23.641591 systemd[1]: Reached target swap.target.
Dec 13 03:52:23.641603 systemd[1]: Reached target torcx.target.
Dec 13 03:52:23.641617 systemd[1]: Reached target veritysetup.target.
Dec 13 03:52:23.641629 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 03:52:23.641641 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 03:52:23.641653 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 03:52:23.641665 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 03:52:23.641677 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 03:52:23.641689 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 03:52:23.641700 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 03:52:23.641712 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 03:52:23.641726 systemd[1]: Mounting media.mount...
Dec 13 03:52:23.641739 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 03:52:23.641751 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 03:52:23.641762 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 03:52:23.641774 systemd[1]: Mounting tmp.mount...
Dec 13 03:52:23.641787 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 03:52:23.641799 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 03:52:23.641811 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 03:52:23.641823 systemd[1]: Starting modprobe@configfs.service...
Dec 13 03:52:23.641837 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 03:52:23.641849 systemd[1]: Starting modprobe@drm.service...
Dec 13 03:52:23.641861 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 03:52:23.641872 systemd[1]: Starting modprobe@fuse.service...
Dec 13 03:52:23.641884 systemd[1]: Starting modprobe@loop.service...
Dec 13 03:52:23.641896 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 03:52:23.650074 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 03:52:23.650092 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 03:52:23.650107 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 03:52:23.650130 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 03:52:23.650143 systemd[1]: Stopped systemd-journald.service.
Dec 13 03:52:23.650156 systemd[1]: Starting systemd-journald.service...
Dec 13 03:52:23.650169 systemd[1]: Starting systemd-modules-load.service...
Dec 13 03:52:23.650181 systemd[1]: Starting systemd-network-generator.service...
Dec 13 03:52:23.650194 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 03:52:23.650207 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 03:52:23.650239 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 03:52:23.650253 systemd[1]: Stopped verity-setup.service.
Dec 13 03:52:23.650269 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 03:52:23.650283 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 03:52:23.650295 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 03:52:23.650308 systemd[1]: Mounted media.mount.
Dec 13 03:52:23.650321 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 03:52:23.650332 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 03:52:23.650344 systemd[1]: Mounted tmp.mount.
Dec 13 03:52:23.650355 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 03:52:23.650368 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 03:52:23.650382 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 03:52:23.650396 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 03:52:23.650410 systemd[1]: Finished modprobe@drm.service.
Dec 13 03:52:23.650423 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 03:52:23.650435 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 03:52:23.650450 systemd[1]: Finished systemd-network-generator.service.
Dec 13 03:52:23.650463 systemd[1]: Finished systemd-modules-load.service.
Dec 13 03:52:23.650475 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 03:52:23.650488 systemd[1]: Finished modprobe@configfs.service.
Dec 13 03:52:23.650500 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 03:52:23.650512 systemd[1]: Reached target network-pre.target.
Dec 13 03:52:23.650525 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 03:52:23.650540 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 03:52:23.650552 kernel: loop: module loaded
Dec 13 03:52:23.650565 kernel: fuse: init (API version 7.34)
Dec 13 03:52:23.650579 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 03:52:23.650592 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 03:52:23.650605 systemd[1]: Starting systemd-random-seed.service...
Dec 13 03:52:23.650618 systemd[1]: Starting systemd-sysctl.service...
Dec 13 03:52:23.650630 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 03:52:23.650647 systemd-journald[913]: Journal started
Dec 13 03:52:23.650712 systemd-journald[913]: Runtime Journal (/run/log/journal/fe3dd3fe4f48463f94a8d15466136c68) is 4.9M, max 39.5M, 34.5M free.
Dec 13 03:52:16.821000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 03:52:16.964000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 03:52:16.964000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 03:52:16.964000 audit: BPF prog-id=10 op=LOAD
Dec 13 03:52:16.964000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 03:52:16.964000 audit: BPF prog-id=11 op=LOAD
Dec 13 03:52:16.964000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 03:52:17.603000 audit[847]: AVC avc: denied { associate } for pid=847 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 03:52:17.603000 audit[847]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=830 pid=847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:52:17.603000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 03:52:17.608000 audit[847]: AVC avc: denied { associate } for pid=847 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 03:52:17.608000 audit[847]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=830 pid=847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:52:17.608000 audit: CWD cwd="/"
Dec 13 03:52:17.608000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:17.608000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:17.608000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 03:52:23.371000 audit: BPF prog-id=12 op=LOAD
Dec 13 03:52:23.371000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 03:52:23.373000 audit: BPF prog-id=13 op=LOAD
Dec 13 03:52:23.376000 audit: BPF prog-id=14 op=LOAD
Dec 13 03:52:23.376000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 03:52:23.376000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 03:52:23.379000 audit: BPF prog-id=15 op=LOAD
Dec 13 03:52:23.379000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 03:52:23.382000 audit: BPF prog-id=16 op=LOAD
Dec 13 03:52:23.385000 audit: BPF prog-id=17 op=LOAD
Dec 13 03:52:23.385000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 03:52:23.385000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 03:52:23.388000 audit: BPF prog-id=18 op=LOAD
Dec 13 03:52:23.388000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 03:52:23.391000 audit: BPF prog-id=19 op=LOAD
Dec 13 03:52:23.393000 audit: BPF prog-id=20 op=LOAD
Dec 13 03:52:23.393000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 03:52:23.393000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 03:52:23.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.415000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 03:52:23.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.532000 audit: BPF prog-id=21 op=LOAD
Dec 13 03:52:23.532000 audit: BPF prog-id=22 op=LOAD
Dec 13 03:52:23.533000 audit: BPF prog-id=23 op=LOAD
Dec 13 03:52:23.533000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 03:52:23.533000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 03:52:23.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.632000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 03:52:23.632000 audit[913]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff9507e700 a2=4000 a3=7fff9507e79c items=0 ppid=1 pid=913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:52:23.632000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 03:52:23.369698 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 03:52:23.665203 systemd[1]: Finished modprobe@loop.service.
Dec 13 03:52:23.665239 systemd[1]: Started systemd-journald.service.
Dec 13 03:52:23.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:17.596639 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 03:52:23.369713 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Dec 13 03:52:17.597799 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:17Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 03:52:23.401330 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 03:52:17.597876 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:17Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 03:52:23.656079 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 03:52:17.598017 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:17Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 03:52:23.656195 systemd[1]: Finished modprobe@fuse.service.
Dec 13 03:52:17.598048 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:17Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 03:52:23.656726 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 03:52:17.598122 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:17Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 03:52:23.658517 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 03:52:17.598158 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:17Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 03:52:23.661682 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 03:52:17.598626 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:17Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 03:52:23.662306 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 03:52:17.598727 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:17Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 03:52:23.663892 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 03:52:17.598764 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:17Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 03:52:17.601944 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:17Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 03:52:17.602079 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:17Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 03:52:17.602131 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:17Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 03:52:17.602171 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:17Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 03:52:17.602217 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:17Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 03:52:17.602255 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:17Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 03:52:22.463392 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:22Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 03:52:22.464591 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:22Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 03:52:22.464910 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:22Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 03:52:22.465422 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:22Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 03:52:22.465568 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:22Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 03:52:22.465733 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T03:52:22Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 03:52:23.682525 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 03:52:23.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.685021 systemd[1]: Starting systemd-sysusers.service...
Dec 13 03:52:23.698401 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 03:52:23.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.702153 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 03:52:23.720210 udevadm[956]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 03:52:23.734777 systemd[1]: Finished systemd-random-seed.service.
Dec 13 03:52:23.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.735441 systemd[1]: Reached target first-boot-complete.target.
Dec 13 03:52:23.737241 systemd-journald[913]: Time spent on flushing to /var/log/journal/fe3dd3fe4f48463f94a8d15466136c68 is 17.978ms for 1119 entries.
Dec 13 03:52:23.737241 systemd-journald[913]: System Journal (/var/log/journal/fe3dd3fe4f48463f94a8d15466136c68) is 8.0M, max 584.8M, 576.8M free.
Dec 13 03:52:23.771331 systemd-journald[913]: Received client request to flush runtime journal.
Dec 13 03:52:23.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.747341 systemd[1]: Finished systemd-sysctl.service.
Dec 13 03:52:23.772458 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 03:52:23.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.792275 systemd[1]: Finished systemd-sysusers.service.
Dec 13 03:52:23.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:23.794129 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 03:52:23.840366 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 03:52:23.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:25.681610 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 03:52:25.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:25.683000 audit: BPF prog-id=24 op=LOAD
Dec 13 03:52:25.683000 audit: BPF prog-id=25 op=LOAD
Dec 13 03:52:25.683000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 03:52:25.683000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 03:52:25.686326 systemd[1]: Starting systemd-udevd.service...
Dec 13 03:52:25.729695 systemd-udevd[961]: Using default interface naming scheme 'v252'.
Dec 13 03:52:25.785642 systemd[1]: Started systemd-udevd.service.
Dec 13 03:52:25.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:25.791000 audit: BPF prog-id=26 op=LOAD
Dec 13 03:52:25.795603 systemd[1]: Starting systemd-networkd.service...
Dec 13 03:52:25.817000 audit: BPF prog-id=27 op=LOAD
Dec 13 03:52:25.818000 audit: BPF prog-id=28 op=LOAD
Dec 13 03:52:25.820000 audit: BPF prog-id=29 op=LOAD
Dec 13 03:52:25.823020 systemd[1]: Starting systemd-userdbd.service...
Dec 13 03:52:25.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:25.875425 systemd[1]: Started systemd-userdbd.service.
Dec 13 03:52:25.887283 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 03:52:25.967475 systemd-networkd[974]: lo: Link UP
Dec 13 03:52:25.967493 systemd-networkd[974]: lo: Gained carrier
Dec 13 03:52:25.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:25.968606 systemd-networkd[974]: Enumeration completed
Dec 13 03:52:25.968705 systemd[1]: Started systemd-networkd.service.
Dec 13 03:52:25.969588 systemd-networkd[974]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 03:52:25.971004 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 03:52:25.972984 systemd-networkd[974]: eth0: Link UP
Dec 13 03:52:25.972991 systemd-networkd[974]: eth0: Gained carrier
Dec 13 03:52:25.983312 systemd-networkd[974]: eth0: DHCPv4 address 172.24.4.199/24, gateway 172.24.4.1 acquired from 172.24.4.1
Dec 13 03:52:25.986010 kernel: ACPI: button: Power Button [PWRF]
Dec 13 03:52:25.989000 audit[967]: AVC avc: denied { confidentiality } for pid=967 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 03:52:25.989000 audit[967]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5636b3da4690 a1=337fc a2=7feeaf78ebc5 a3=5 items=110 ppid=961 pid=967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:52:25.989000 audit: CWD cwd="/"
Dec 13 03:52:25.989000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=1 name=(null) inode=14484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=2 name=(null) inode=14484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=3 name=(null) inode=14485 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=4 name=(null) inode=14484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=5 name=(null) inode=14486 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=6 name=(null) inode=14484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=7 name=(null) inode=14487 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=8 name=(null) inode=14487 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=9 name=(null) inode=14488 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=10 name=(null) inode=14487 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=11 name=(null) inode=14489 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=12 name=(null) inode=14487 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=13 name=(null) inode=14490 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=14 name=(null) inode=14487 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=15 name=(null) inode=14491 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=16 name=(null) inode=14487 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=17 name=(null) inode=14492 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=18 name=(null) inode=14484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=19 name=(null) inode=14493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=20 name=(null) inode=14493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=21 name=(null) inode=14494 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=22 name=(null) inode=14493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=23 name=(null) inode=14495 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=24 name=(null) inode=14493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=25 name=(null) inode=14496 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=26 name=(null) inode=14493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=27 name=(null) inode=14497 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=28 name=(null) inode=14493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=29 name=(null) inode=14498 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=30 name=(null) inode=14484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=31 name=(null) inode=14499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=32 name=(null) inode=14499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=33 name=(null) inode=14500 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=34 name=(null) inode=14499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=35 name=(null) inode=14501 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=36 name=(null) inode=14499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=37 name=(null) inode=14502 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=38 name=(null) inode=14499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=39 name=(null) inode=14503 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=40 name=(null) inode=14499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=41 name=(null) inode=14504 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=42 name=(null) inode=14484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=43 name=(null) inode=14505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=44 name=(null) inode=14505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=45 name=(null) inode=14506 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=46 name=(null) inode=14505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=47 name=(null) inode=14507 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=48 name=(null) inode=14505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=49 name=(null) inode=14508 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=50 name=(null) inode=14505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=51 name=(null) inode=14509 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=52 name=(null) inode=14505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=53 name=(null) inode=14510 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=55 name=(null) inode=14511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=56 name=(null) inode=14511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=57 name=(null) inode=14512 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=58 name=(null) inode=14511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=59 name=(null) inode=14513 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=60 name=(null) inode=14511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=61 name=(null) inode=14514 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=62 name=(null) inode=14514 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=63 name=(null) inode=14515 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=64 name=(null) inode=14514 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=65 name=(null) inode=14516 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=66 name=(null) inode=14514 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:52:25.989000 audit: PATH item=67 name=(null) inode=14517 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
03:52:25.989000 audit: PATH item=68 name=(null) inode=14514 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=69 name=(null) inode=14518 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=70 name=(null) inode=14514 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=71 name=(null) inode=14519 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=72 name=(null) inode=14511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=73 name=(null) inode=14520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=74 name=(null) inode=14520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=75 name=(null) inode=14521 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=76 name=(null) inode=14520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=77 
name=(null) inode=14522 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=78 name=(null) inode=14520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=79 name=(null) inode=14523 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=80 name=(null) inode=14520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=81 name=(null) inode=14524 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=82 name=(null) inode=14520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=83 name=(null) inode=14525 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=84 name=(null) inode=14511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=85 name=(null) inode=14526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=86 name=(null) inode=14526 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=87 name=(null) inode=14527 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=88 name=(null) inode=14526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=89 name=(null) inode=14528 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=90 name=(null) inode=14526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=91 name=(null) inode=14529 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=92 name=(null) inode=14526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=93 name=(null) inode=14530 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=94 name=(null) inode=14526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=95 name=(null) inode=14531 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=96 name=(null) inode=14511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=97 name=(null) inode=14532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=98 name=(null) inode=14532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=99 name=(null) inode=14533 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=100 name=(null) inode=14532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=101 name=(null) inode=14534 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=102 name=(null) inode=14532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=103 name=(null) inode=14535 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=104 name=(null) inode=14532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=105 name=(null) inode=14536 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=106 name=(null) inode=14532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=107 name=(null) inode=14537 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PATH item=109 name=(null) inode=14538 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:52:25.989000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 03:52:26.018524 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 03:52:26.030025 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 03:52:26.036016 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 03:52:26.038002 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 13 03:52:26.084552 systemd[1]: Finished systemd-udev-settle.service. Dec 13 03:52:26.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:26.086808 systemd[1]: Starting lvm2-activation-early.service... 
Dec 13 03:52:26.132621 lvm[990]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 03:52:26.175105 systemd[1]: Finished lvm2-activation-early.service. Dec 13 03:52:26.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:26.176602 systemd[1]: Reached target cryptsetup.target. Dec 13 03:52:26.180578 systemd[1]: Starting lvm2-activation.service... Dec 13 03:52:26.190668 lvm[991]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 03:52:26.229460 systemd[1]: Finished lvm2-activation.service. Dec 13 03:52:26.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:26.230934 systemd[1]: Reached target local-fs-pre.target. Dec 13 03:52:26.232285 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 03:52:26.232350 systemd[1]: Reached target local-fs.target. Dec 13 03:52:26.233440 systemd[1]: Reached target machines.target. Dec 13 03:52:26.237428 systemd[1]: Starting ldconfig.service... Dec 13 03:52:26.239943 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:52:26.240090 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:52:26.242565 systemd[1]: Starting systemd-boot-update.service... Dec 13 03:52:26.247860 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 03:52:26.255773 systemd[1]: Starting systemd-machine-id-commit.service... 
Dec 13 03:52:26.259774 systemd[1]: Starting systemd-sysext.service... Dec 13 03:52:26.275532 systemd[1]: boot.automount: Got automount request for /boot, triggered by 993 (bootctl) Dec 13 03:52:26.278741 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 03:52:26.296875 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 03:52:26.417301 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 03:52:26.418034 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 03:52:26.482625 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 03:52:26.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:26.530186 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 03:52:26.781606 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 03:52:26.783149 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 03:52:26.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:26.840504 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 03:52:26.876035 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 03:52:26.935471 (sd-sysext)[1006]: Using extensions 'kubernetes'. Dec 13 03:52:26.938381 (sd-sysext)[1006]: Merged extensions into '/usr'. Dec 13 03:52:26.974454 systemd-fsck[1003]: fsck.fat 4.2 (2021-01-31) Dec 13 03:52:26.974454 systemd-fsck[1003]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 03:52:26.994381 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Dec 13 03:52:26.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:27.001031 systemd[1]: Mounting boot.mount... Dec 13 03:52:27.001622 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:52:27.003193 systemd[1]: Mounting usr-share-oem.mount... Dec 13 03:52:27.005357 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 03:52:27.006904 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:52:27.008566 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:52:27.010126 systemd[1]: Starting modprobe@loop.service... Dec 13 03:52:27.010652 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:52:27.010785 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:52:27.010935 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:52:27.012423 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:52:27.013661 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:52:27.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:27.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:52:27.017313 systemd[1]: Mounted usr-share-oem.mount. Dec 13 03:52:27.018442 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 03:52:27.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:27.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:27.018564 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 03:52:27.019493 systemd[1]: Finished systemd-sysext.service. Dec 13 03:52:27.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:27.020576 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 03:52:27.020702 systemd[1]: Finished modprobe@loop.service. Dec 13 03:52:27.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:27.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:27.024531 systemd[1]: Starting ensure-sysext.service... Dec 13 03:52:27.025162 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 03:52:27.025209 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Dec 13 03:52:27.026584 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 03:52:27.041548 systemd[1]: Mounted boot.mount. Dec 13 03:52:27.044053 systemd[1]: Reloading. Dec 13 03:52:27.060772 systemd-tmpfiles[1014]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 03:52:27.066203 systemd-tmpfiles[1014]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 03:52:27.070943 systemd-tmpfiles[1014]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 03:52:27.138917 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-12-13T03:52:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 03:52:27.138954 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-12-13T03:52:27Z" level=info msg="torcx already run" Dec 13 03:52:27.288382 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 03:52:27.288593 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 03:52:27.318169 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 03:52:27.382249 systemd-networkd[974]: eth0: Gained IPv6LL Dec 13 03:52:27.393000 audit: BPF prog-id=30 op=LOAD Dec 13 03:52:27.394000 audit: BPF prog-id=26 op=UNLOAD Dec 13 03:52:27.395000 audit: BPF prog-id=31 op=LOAD Dec 13 03:52:27.396000 audit: BPF prog-id=27 op=UNLOAD Dec 13 03:52:27.396000 audit: BPF prog-id=32 op=LOAD Dec 13 03:52:27.397000 audit: BPF prog-id=33 op=LOAD Dec 13 03:52:27.397000 audit: BPF prog-id=28 op=UNLOAD Dec 13 03:52:27.397000 audit: BPF prog-id=29 op=UNLOAD Dec 13 03:52:27.399000 audit: BPF prog-id=34 op=LOAD Dec 13 03:52:27.399000 audit: BPF prog-id=35 op=LOAD Dec 13 03:52:27.399000 audit: BPF prog-id=24 op=UNLOAD Dec 13 03:52:27.399000 audit: BPF prog-id=25 op=UNLOAD Dec 13 03:52:27.403000 audit: BPF prog-id=36 op=LOAD Dec 13 03:52:27.403000 audit: BPF prog-id=21 op=UNLOAD Dec 13 03:52:27.404000 audit: BPF prog-id=37 op=LOAD Dec 13 03:52:27.404000 audit: BPF prog-id=38 op=LOAD Dec 13 03:52:27.404000 audit: BPF prog-id=22 op=UNLOAD Dec 13 03:52:27.404000 audit: BPF prog-id=23 op=UNLOAD Dec 13 03:52:27.411614 systemd[1]: Finished systemd-boot-update.service. Dec 13 03:52:27.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:27.414286 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 03:52:27.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:27.419319 systemd[1]: Starting audit-rules.service... Dec 13 03:52:27.421297 systemd[1]: Starting clean-ca-certificates.service... Dec 13 03:52:27.423244 systemd[1]: Starting systemd-journal-catalog-update.service... 
Dec 13 03:52:27.429000 audit: BPF prog-id=39 op=LOAD Dec 13 03:52:27.431633 systemd[1]: Starting systemd-resolved.service... Dec 13 03:52:27.434000 audit: BPF prog-id=40 op=LOAD Dec 13 03:52:27.437471 systemd[1]: Starting systemd-timesyncd.service... Dec 13 03:52:27.440064 systemd[1]: Starting systemd-update-utmp.service... Dec 13 03:52:27.452546 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:52:27.452768 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 03:52:27.454789 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:52:27.456884 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:52:27.460553 systemd[1]: Starting modprobe@loop.service... Dec 13 03:52:27.461940 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:52:27.462117 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:52:27.462261 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:52:27.465547 systemd[1]: Finished clean-ca-certificates.service. Dec 13 03:52:27.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:27.466773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:52:27.466904 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:52:27.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:52:27.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:27.467771 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 03:52:27.467883 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 03:52:27.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:27.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:27.469109 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 03:52:27.469239 systemd[1]: Finished modprobe@loop.service. Dec 13 03:52:27.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:27.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:52:27.470262 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 03:52:27.470400 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 03:52:27.470488 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Dec 13 03:52:27.473235 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 03:52:27.473455 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 03:52:27.475901 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 03:52:27.478363 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 03:52:27.480820 systemd[1]: Starting modprobe@loop.service...
Dec 13 03:52:27.482181 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 03:52:27.482330 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 03:52:27.482457 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 03:52:27.482548 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 03:52:27.483754 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 03:52:27.485046 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 03:52:27.485676 ldconfig[992]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 03:52:27.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:27.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:27.491535 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 03:52:27.491858 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 03:52:27.493487 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 03:52:27.495406 systemd[1]: Starting modprobe@drm.service...
Dec 13 03:52:27.496079 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 03:52:27.496287 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 03:52:27.498111 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 03:52:27.498960 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 03:52:27.499116 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 03:52:27.501005 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 03:52:27.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:27.503651 systemd[1]: Finished ldconfig.service.
Dec 13 03:52:27.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:27.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:27.505433 systemd[1]: Finished ensure-sysext.service.
Dec 13 03:52:27.507783 systemd[1]: Starting systemd-update-done.service...
Dec 13 03:52:27.509732 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 03:52:27.509893 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 03:52:27.509000 audit[1089]: SYSTEM_BOOT pid=1089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:27.518846 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 03:52:27.519010 systemd[1]: Finished modprobe@drm.service.
Dec 13 03:52:27.519719 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 03:52:27.520429 systemd[1]: Finished systemd-update-done.service.
Dec 13 03:52:27.521080 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 03:52:27.521202 systemd[1]: Finished modprobe@loop.service.
Dec 13 03:52:27.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:27.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:27.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:27.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:27.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:27.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:27.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:27.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:27.524016 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 03:52:27.527926 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 03:52:27.528132 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 03:52:27.528787 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 03:52:27.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:27.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:27.529395 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 03:52:27.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:52:27.550000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 03:52:27.550000 audit[1111]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffde193000 a2=420 a3=0 items=0 ppid=1081 pid=1111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:52:27.550000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 03:52:27.552037 augenrules[1111]: No rules
Dec 13 03:52:27.552065 systemd[1]: Finished audit-rules.service.
Dec 13 03:52:27.576814 systemd-resolved[1085]: Positive Trust Anchors:
Dec 13 03:52:27.577243 systemd-resolved[1085]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 03:52:27.577354 systemd-resolved[1085]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 03:52:27.580033 systemd[1]: Started systemd-timesyncd.service.
Dec 13 03:52:27.580613 systemd[1]: Reached target time-set.target.
Dec 13 03:52:27.586237 systemd-resolved[1085]: Using system hostname 'ci-3510-3-6-5-153fa2e4c7.novalocal'.
Dec 13 03:52:27.588010 systemd[1]: Started systemd-resolved.service.
Dec 13 03:52:27.588540 systemd[1]: Reached target network.target.
Dec 13 03:52:27.588961 systemd[1]: Reached target network-online.target.
Dec 13 03:52:27.589395 systemd[1]: Reached target nss-lookup.target.
Dec 13 03:52:27.589810 systemd[1]: Reached target sysinit.target.
Dec 13 03:52:27.590322 systemd[1]: Started motdgen.path.
Dec 13 03:52:27.590747 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 03:52:27.591521 systemd[1]: Started logrotate.timer.
Dec 13 03:52:27.592025 systemd[1]: Started mdadm.timer.
Dec 13 03:52:27.592408 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 03:52:27.592833 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 03:52:27.592861 systemd[1]: Reached target paths.target.
Dec 13 03:52:27.593272 systemd[1]: Reached target timers.target.
Dec 13 03:52:27.594075 systemd[1]: Listening on dbus.socket.
Dec 13 03:52:27.595575 systemd[1]: Starting docker.socket...
Dec 13 03:52:27.599238 systemd[1]: Listening on sshd.socket.
Dec 13 03:52:27.599853 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 03:52:27.600350 systemd[1]: Listening on docker.socket.
Dec 13 03:52:27.601013 systemd[1]: Reached target sockets.target.
Dec 13 03:52:27.601547 systemd[1]: Reached target basic.target.
Dec 13 03:52:27.602132 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 03:52:27.602159 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 03:52:27.602717 systemd-timesyncd[1086]: Contacted time server 51.255.95.80:123 (0.flatcar.pool.ntp.org).
Dec 13 03:52:27.603015 systemd-timesyncd[1086]: Initial clock synchronization to Fri 2024-12-13 03:52:27.491985 UTC.
Dec 13 03:52:27.603923 systemd[1]: Starting containerd.service...
Dec 13 03:52:27.605622 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 03:52:27.607143 systemd[1]: Starting dbus.service...
Dec 13 03:52:27.609569 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 03:52:27.611214 systemd[1]: Starting extend-filesystems.service...
Dec 13 03:52:27.612607 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 03:52:27.617027 systemd[1]: Starting kubelet.service...
Dec 13 03:52:27.618451 systemd[1]: Starting motdgen.service...
Dec 13 03:52:27.620192 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 03:52:27.621615 systemd[1]: Starting sshd-keygen.service...
Dec 13 03:52:27.623346 jq[1125]: false
Dec 13 03:52:27.627827 systemd[1]: Starting systemd-logind.service...
Dec 13 03:52:27.628346 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 03:52:27.628410 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 03:52:27.628914 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 03:52:27.632111 systemd[1]: Starting update-engine.service...
Dec 13 03:52:27.635583 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 03:52:27.644104 jq[1135]: true
Dec 13 03:52:27.644845 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 03:52:27.645061 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 03:52:27.652259 systemd[1]: Created slice system-sshd.slice.
Dec 13 03:52:27.664394 extend-filesystems[1126]: Found loop1
Dec 13 03:52:27.668030 extend-filesystems[1126]: Found vda
Dec 13 03:52:27.669088 extend-filesystems[1126]: Found vda1
Dec 13 03:52:27.672755 extend-filesystems[1126]: Found vda2
Dec 13 03:52:27.673423 extend-filesystems[1126]: Found vda3
Dec 13 03:52:27.674294 jq[1141]: true
Dec 13 03:52:27.675201 extend-filesystems[1126]: Found usr
Dec 13 03:52:27.675201 extend-filesystems[1126]: Found vda4
Dec 13 03:52:27.675201 extend-filesystems[1126]: Found vda6
Dec 13 03:52:27.675201 extend-filesystems[1126]: Found vda7
Dec 13 03:52:27.675201 extend-filesystems[1126]: Found vda9
Dec 13 03:52:27.675201 extend-filesystems[1126]: Checking size of /dev/vda9
Dec 13 03:52:27.679618 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 03:52:27.679796 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 03:52:27.696089 dbus-daemon[1122]: [system] SELinux support is enabled
Dec 13 03:52:27.696351 systemd[1]: Started dbus.service.
Dec 13 03:52:27.700206 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 03:52:27.700249 systemd[1]: Reached target system-config.target.
Dec 13 03:52:27.700864 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 03:52:27.700891 systemd[1]: Reached target user-config.target.
Dec 13 03:52:27.703638 extend-filesystems[1126]: Resized partition /dev/vda9
Dec 13 03:52:27.717849 extend-filesystems[1162]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 03:52:27.759532 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 03:52:27.763224 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks
Dec 13 03:52:27.759726 systemd[1]: Finished motdgen.service.
Dec 13 03:52:27.861323 update_engine[1132]: I1213 03:52:27.839795 1132 main.cc:92] Flatcar Update Engine starting
Dec 13 03:52:27.868010 env[1143]: time="2024-12-13T03:52:27.867912865Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 03:52:27.868470 systemd-logind[1131]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 03:52:27.868716 systemd-logind[1131]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 03:52:27.870927 systemd-logind[1131]: New seat seat0.
Dec 13 03:52:27.879825 systemd[1]: Started systemd-logind.service.
Dec 13 03:52:27.888959 systemd[1]: Started update-engine.service.
Dec 13 03:52:27.892319 systemd[1]: Started locksmithd.service.
Dec 13 03:52:27.896040 update_engine[1132]: I1213 03:52:27.893504 1132 update_check_scheduler.cc:74] Next update check in 6m25s
Dec 13 03:52:27.906260 env[1143]: time="2024-12-13T03:52:27.906199462Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 03:52:27.916349 env[1143]: time="2024-12-13T03:52:27.915258522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 03:52:27.917146 env[1143]: time="2024-12-13T03:52:27.916951627Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 03:52:27.918805 env[1143]: time="2024-12-13T03:52:27.918751854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 03:52:27.926026 kernel: EXT4-fs (vda9): resized filesystem to 4635643
Dec 13 03:52:27.926179 env[1143]: time="2024-12-13T03:52:27.926154908Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 03:52:28.025984 env[1143]: time="2024-12-13T03:52:27.926227234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 03:52:28.025984 env[1143]: time="2024-12-13T03:52:27.926256098Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 03:52:28.025984 env[1143]: time="2024-12-13T03:52:27.926270635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 03:52:27.927835 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 03:52:28.026501 bash[1175]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 03:52:28.036546 env[1143]: time="2024-12-13T03:52:28.027356860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 03:52:28.036546 env[1143]: time="2024-12-13T03:52:28.028601758Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 03:52:28.036546 env[1143]: time="2024-12-13T03:52:28.029688395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 03:52:28.030551 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 03:52:28.036931 extend-filesystems[1162]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 03:52:28.036931 extend-filesystems[1162]: old_desc_blocks = 1, new_desc_blocks = 3
Dec 13 03:52:28.036931 extend-filesystems[1162]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long.
Dec 13 03:52:28.030982 systemd[1]: Finished extend-filesystems.service.
Dec 13 03:52:28.043725 env[1143]: time="2024-12-13T03:52:28.038303381Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 03:52:28.043725 env[1143]: time="2024-12-13T03:52:28.038595597Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 03:52:28.043725 env[1143]: time="2024-12-13T03:52:28.038704735Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 03:52:28.043920 extend-filesystems[1126]: Resized filesystem in /dev/vda9
Dec 13 03:52:28.116889 env[1143]: time="2024-12-13T03:52:28.114937010Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 03:52:28.116889 env[1143]: time="2024-12-13T03:52:28.115161133Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 03:52:28.116889 env[1143]: time="2024-12-13T03:52:28.115240066Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 03:52:28.116889 env[1143]: time="2024-12-13T03:52:28.115358569Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 03:52:28.116889 env[1143]: time="2024-12-13T03:52:28.115455613Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 03:52:28.116889 env[1143]: time="2024-12-13T03:52:28.115541412Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 03:52:28.116889 env[1143]: time="2024-12-13T03:52:28.115618231Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 03:52:28.116889 env[1143]: time="2024-12-13T03:52:28.115659393Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 03:52:28.116889 env[1143]: time="2024-12-13T03:52:28.115734373Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 03:52:28.116889 env[1143]: time="2024-12-13T03:52:28.115806222Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 03:52:28.116889 env[1143]: time="2024-12-13T03:52:28.115845970Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 03:52:28.116889 env[1143]: time="2024-12-13T03:52:28.115919637Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 03:52:28.116889 env[1143]: time="2024-12-13T03:52:28.116425099Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 03:52:28.118857 env[1143]: time="2024-12-13T03:52:28.116755355Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 03:52:28.119084 env[1143]: time="2024-12-13T03:52:28.119037537Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 03:52:28.119352 env[1143]: time="2024-12-13T03:52:28.119273773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 03:52:28.119589 env[1143]: time="2024-12-13T03:52:28.119522359Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 03:52:28.119948 env[1143]: time="2024-12-13T03:52:28.119878511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 03:52:28.120254 env[1143]: time="2024-12-13T03:52:28.120173831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 03:52:28.120511 env[1143]: time="2024-12-13T03:52:28.120438572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 03:52:28.123898 env[1143]: time="2024-12-13T03:52:28.123814750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 03:52:28.124186 env[1143]: time="2024-12-13T03:52:28.124116137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 03:52:28.124423 env[1143]: time="2024-12-13T03:52:28.124355623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 03:52:28.124635 env[1143]: time="2024-12-13T03:52:28.124569906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 03:52:28.124872 env[1143]: time="2024-12-13T03:52:28.124798009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 03:52:28.125264 env[1143]: time="2024-12-13T03:52:28.125112655Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 03:52:28.125874 env[1143]: time="2024-12-13T03:52:28.125828565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 03:52:28.126138 env[1143]: time="2024-12-13T03:52:28.126100084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 03:52:28.126329 env[1143]: time="2024-12-13T03:52:28.126290862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 03:52:28.126538 env[1143]: time="2024-12-13T03:52:28.126500124Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 03:52:28.126747 env[1143]: time="2024-12-13T03:52:28.126700100Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 03:52:28.126901 env[1143]: time="2024-12-13T03:52:28.126865346Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 03:52:28.127109 env[1143]: time="2024-12-13T03:52:28.127068839Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 03:52:28.127367 env[1143]: time="2024-12-13T03:52:28.127328956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 03:52:28.128185 env[1143]: time="2024-12-13T03:52:28.128029048Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 03:52:28.247477 env[1143]: time="2024-12-13T03:52:28.128515560Z" level=info msg="Connect containerd service"
Dec 13 03:52:28.247477 env[1143]: time="2024-12-13T03:52:28.128646216Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 03:52:28.247477 env[1143]: time="2024-12-13T03:52:28.130139542Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 03:52:28.247477 env[1143]: time="2024-12-13T03:52:28.130617606Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 03:52:28.247477 env[1143]: time="2024-12-13T03:52:28.130710746Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 03:52:28.247477 env[1143]: time="2024-12-13T03:52:28.132875779Z" level=info msg="containerd successfully booted in 0.309523s"
Dec 13 03:52:28.247477 env[1143]: time="2024-12-13T03:52:28.135382558Z" level=info msg="Start subscribing containerd event"
Dec 13 03:52:28.247477 env[1143]: time="2024-12-13T03:52:28.135497711Z" level=info msg="Start recovering state"
Dec 13 03:52:28.247477 env[1143]: time="2024-12-13T03:52:28.135593362Z" level=info msg="Start event monitor"
Dec 13 03:52:28.247477 env[1143]: time="2024-12-13T03:52:28.135615919Z" level=info msg="Start snapshots syncer"
Dec 13 03:52:28.247477 env[1143]: time="2024-12-13T03:52:28.135628743Z" level=info msg="Start cni network conf syncer for default"
Dec 13 03:52:28.247477 env[1143]: time="2024-12-13T03:52:28.135643395Z" level=info msg="Start streaming server"
Dec 13 03:52:28.130995 systemd[1]: Started containerd.service.
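The containerd startup record above dumps the resolved CRI plugin configuration: overlayfs snapshotter, the runc runtime via io.containerd.runc.v2 with Options:map[SystemdCgroup:true], sandbox image registry.k8s.io/pause:3.6, and CNI paths /opt/cni/bin and /etc/cni/net.d. As a hedged sketch only, the same settings expressed as a containerd 1.6 /etc/containerd/config.toml fragment would look roughly as follows; the actual config file on this host is not part of the log, so treat every value as illustrative:

```toml
# Illustrative containerd 1.6 config fragment mirroring the values the
# CRI plugin reports at startup in the log above; not the host's real file.
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"   # SandboxImage in the logged dump
  enable_selinux = true                          # EnableSelinux:true

  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"                    # Snapshotter:overlayfs
    default_runtime_name = "runc"                # DefaultRuntimeName:runc

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"     # Type:io.containerd.runc.v2

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true                     # Options:map[SystemdCgroup:true]

  [plugins."io.containerd.grpc.v1.cri".cni]
    # The "failed to load cni during init" error above simply means no
    # network config exists yet under conf_dir at boot time.
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"
```

The "cni config load failed" line is therefore expected on a fresh node: a CNI plugin (installed later, typically by the cluster bootstrap) populates /etc/cni/net.d, after which the conf syncer started at the end of the log picks it up.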
Dec 13 03:52:28.589263 locksmithd[1179]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 03:52:29.495233 sshd_keygen[1144]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 03:52:29.537708 systemd[1]: Finished sshd-keygen.service.
Dec 13 03:52:29.539927 systemd[1]: Starting issuegen.service...
Dec 13 03:52:29.541567 systemd[1]: Started sshd@0-172.24.4.199:22-172.24.4.1:50954.service.
Dec 13 03:52:29.547312 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 03:52:29.547483 systemd[1]: Finished issuegen.service.
Dec 13 03:52:29.549436 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 03:52:29.559278 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 03:52:29.561366 systemd[1]: Started getty@tty1.service.
Dec 13 03:52:29.563157 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 03:52:29.563841 systemd[1]: Reached target getty.target.
Dec 13 03:52:30.110590 systemd[1]: Started kubelet.service.
Dec 13 03:52:30.749199 sshd[1197]: Accepted publickey for core from 172.24.4.1 port 50954 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:52:30.751757 sshd[1197]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:52:30.791057 systemd[1]: Created slice user-500.slice.
Dec 13 03:52:30.792041 systemd-logind[1131]: New session 1 of user core.
Dec 13 03:52:30.793450 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 03:52:30.805949 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 03:52:30.808233 systemd[1]: Starting user@500.service...
Dec 13 03:52:30.814122 (systemd)[1214]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:52:30.919979 systemd[1214]: Queued start job for default target default.target.
Dec 13 03:52:30.920870 systemd[1214]: Reached target paths.target.
Dec 13 03:52:30.920896 systemd[1214]: Reached target sockets.target.
Dec 13 03:52:30.920910 systemd[1214]: Reached target timers.target.
Dec 13 03:52:30.920924 systemd[1214]: Reached target basic.target.
Dec 13 03:52:30.921051 systemd[1]: Started user@500.service.
Dec 13 03:52:30.922428 systemd[1]: Started session-1.scope.
Dec 13 03:52:30.926874 systemd[1214]: Reached target default.target.
Dec 13 03:52:30.926926 systemd[1214]: Startup finished in 105ms.
Dec 13 03:52:31.500119 systemd[1]: Started sshd@1-172.24.4.199:22-172.24.4.1:50968.service.
Dec 13 03:52:32.385128 kubelet[1206]: E1213 03:52:32.384949 1206 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 03:52:32.389844 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 03:52:32.390251 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 03:52:32.390832 systemd[1]: kubelet.service: Consumed 2.176s CPU time.
Dec 13 03:52:33.268246 sshd[1224]: Accepted publickey for core from 172.24.4.1 port 50968 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:52:33.597934 sshd[1224]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:52:33.608836 systemd-logind[1131]: New session 2 of user core.
Dec 13 03:52:33.609876 systemd[1]: Started session-2.scope.
Dec 13 03:52:34.101789 sshd[1224]: pam_unix(sshd:session): session closed for user core
Dec 13 03:52:34.110440 systemd[1]: sshd@1-172.24.4.199:22-172.24.4.1:50968.service: Deactivated successfully.
Dec 13 03:52:34.111867 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 03:52:34.113457 systemd-logind[1131]: Session 2 logged out. Waiting for processes to exit.
Dec 13 03:52:34.116232 systemd[1]: Started sshd@2-172.24.4.199:22-172.24.4.1:50982.service.
Dec 13 03:52:34.120658 systemd-logind[1131]: Removed session 2.
Dec 13 03:52:34.573252 systemd[1]: serial-getty@ttyS0.service: Deactivated successfully.
Dec 13 03:52:34.711428 systemd[1]: serial-getty@ttyS0.service: Scheduled restart job, restart counter is at 1.
Dec 13 03:52:34.711884 systemd[1]: Stopped serial-getty@ttyS0.service.
Dec 13 03:52:34.714804 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 03:52:34.726440 coreos-metadata[1121]: Dec 13 03:52:34.726 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 03:52:34.810385 coreos-metadata[1121]: Dec 13 03:52:34.810 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Dec 13 03:52:35.018870 coreos-metadata[1121]: Dec 13 03:52:35.018 INFO Fetch successful
Dec 13 03:52:35.018870 coreos-metadata[1121]: Dec 13 03:52:35.018 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 03:52:35.033177 coreos-metadata[1121]: Dec 13 03:52:35.033 INFO Fetch successful
Dec 13 03:52:35.042599 unknown[1121]: wrote ssh authorized keys file for user: core
Dec 13 03:52:35.109323 update-ssh-keys[1235]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 03:52:35.111260 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Dec 13 03:52:35.112305 systemd[1]: Reached target multi-user.target.
Dec 13 03:52:35.115682 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 03:52:35.133778 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 03:52:35.134189 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 03:52:35.136435 systemd[1]: Startup finished in 1.003s (kernel) + 10.542s (initrd) + 18.807s (userspace) = 30.354s.
Dec 13 03:52:35.609336 sshd[1230]: Accepted publickey for core from 172.24.4.1 port 50982 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:52:35.612187 sshd[1230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:52:35.623459 systemd-logind[1131]: New session 3 of user core.
Dec 13 03:52:35.624931 systemd[1]: Started session-3.scope.
Dec 13 03:52:36.102110 sshd[1230]: pam_unix(sshd:session): session closed for user core
Dec 13 03:52:36.108157 systemd[1]: sshd@2-172.24.4.199:22-172.24.4.1:50982.service: Deactivated successfully.
Dec 13 03:52:36.109763 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 03:52:36.111365 systemd-logind[1131]: Session 3 logged out. Waiting for processes to exit.
Dec 13 03:52:36.113720 systemd-logind[1131]: Removed session 3.
Dec 13 03:52:42.461353 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 03:52:42.462582 systemd[1]: Stopped kubelet.service.
Dec 13 03:52:42.462701 systemd[1]: kubelet.service: Consumed 2.176s CPU time.
Dec 13 03:52:42.465570 systemd[1]: Starting kubelet.service...
Dec 13 03:52:42.594777 systemd[1]: Started kubelet.service.
Dec 13 03:52:43.158177 kubelet[1244]: E1213 03:52:43.158079 1244 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 03:52:43.165465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 03:52:43.165779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 03:52:46.084931 systemd[1]: Started sshd@3-172.24.4.199:22-172.24.4.1:50848.service.
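The kubelet crash-restart cycle in the entries above has a single, explicit cause: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-based setup that file is written during `kubeadm init` or `kubeadm join`, so these failures are expected noise until the node is actually joined to a cluster. For orientation only, a minimal KubeletConfiguration of the kind that normally lives at that path is sketched below; the field values are illustrative defaults, not taken from this host:

```yaml
# Illustrative minimal /var/lib/kubelet/config.yaml; on this host the file
# is absent until kubeadm (or other provisioning) writes it, which is why
# the kubelet exits with status=1 on every restart attempt in the log.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# cgroupDriver should match the container runtime; the containerd config
# dump earlier in this log shows SystemdCgroup:true for the runc runtime.
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
```

Note that systemd keeps rescheduling the unit ("Scheduled restart job, restart counter is at 1" and later 2), so the same error recurs roughly every ten seconds until the config file appears.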
Dec 13 03:52:47.542677 sshd[1252]: Accepted publickey for core from 172.24.4.1 port 50848 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:52:47.545376 sshd[1252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:52:47.556798 systemd-logind[1131]: New session 4 of user core.
Dec 13 03:52:47.558272 systemd[1]: Started session-4.scope.
Dec 13 03:52:48.326322 sshd[1252]: pam_unix(sshd:session): session closed for user core
Dec 13 03:52:48.334214 systemd[1]: sshd@3-172.24.4.199:22-172.24.4.1:50848.service: Deactivated successfully.
Dec 13 03:52:48.335617 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 03:52:48.336920 systemd-logind[1131]: Session 4 logged out. Waiting for processes to exit.
Dec 13 03:52:48.340752 systemd[1]: Started sshd@4-172.24.4.199:22-172.24.4.1:50850.service.
Dec 13 03:52:48.344686 systemd-logind[1131]: Removed session 4.
Dec 13 03:52:49.859450 sshd[1258]: Accepted publickey for core from 172.24.4.1 port 50850 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:52:49.862080 sshd[1258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:52:49.872394 systemd-logind[1131]: New session 5 of user core.
Dec 13 03:52:49.875102 systemd[1]: Started session-5.scope.
Dec 13 03:52:50.644900 sshd[1258]: pam_unix(sshd:session): session closed for user core
Dec 13 03:52:50.651344 systemd[1]: Started sshd@5-172.24.4.199:22-172.24.4.1:50866.service.
Dec 13 03:52:50.657275 systemd[1]: sshd@4-172.24.4.199:22-172.24.4.1:50850.service: Deactivated successfully.
Dec 13 03:52:50.658925 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 03:52:50.662802 systemd-logind[1131]: Session 5 logged out. Waiting for processes to exit.
Dec 13 03:52:50.665685 systemd-logind[1131]: Removed session 5.
Dec 13 03:52:51.882302 sshd[1263]: Accepted publickey for core from 172.24.4.1 port 50866 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:52:51.886632 sshd[1263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:52:51.899873 systemd[1]: Started session-6.scope.
Dec 13 03:52:51.901851 systemd-logind[1131]: New session 6 of user core.
Dec 13 03:52:52.667809 sshd[1263]: pam_unix(sshd:session): session closed for user core
Dec 13 03:52:52.674178 systemd[1]: Started sshd@6-172.24.4.199:22-172.24.4.1:50874.service.
Dec 13 03:52:52.678856 systemd[1]: sshd@5-172.24.4.199:22-172.24.4.1:50866.service: Deactivated successfully.
Dec 13 03:52:52.680517 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 03:52:52.683415 systemd-logind[1131]: Session 6 logged out. Waiting for processes to exit.
Dec 13 03:52:52.685932 systemd-logind[1131]: Removed session 6.
Dec 13 03:52:53.211312 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 03:52:53.211748 systemd[1]: Stopped kubelet.service.
Dec 13 03:52:53.214361 systemd[1]: Starting kubelet.service...
Dec 13 03:52:53.323618 systemd[1]: Started kubelet.service.
Dec 13 03:52:53.911493 kubelet[1276]: E1213 03:52:53.911420 1276 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 03:52:53.915670 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 03:52:53.916129 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 03:52:54.088385 sshd[1269]: Accepted publickey for core from 172.24.4.1 port 50874 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:52:54.092078 sshd[1269]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:52:54.103198 systemd-logind[1131]: New session 7 of user core.
Dec 13 03:52:54.105221 systemd[1]: Started session-7.scope.
Dec 13 03:52:54.548399 sudo[1283]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 03:52:54.548884 sudo[1283]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 03:52:54.578477 systemd[1]: Starting coreos-metadata.service...
Dec 13 03:53:01.641314 coreos-metadata[1287]: Dec 13 03:53:01.641 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 03:53:01.704814 coreos-metadata[1287]: Dec 13 03:53:01.704 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 03:53:01.873266 coreos-metadata[1287]: Dec 13 03:53:01.873 INFO Fetch successful
Dec 13 03:53:01.873266 coreos-metadata[1287]: Dec 13 03:53:01.873 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Dec 13 03:53:01.890280 coreos-metadata[1287]: Dec 13 03:53:01.890 INFO Fetch successful
Dec 13 03:53:01.890280 coreos-metadata[1287]: Dec 13 03:53:01.890 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Dec 13 03:53:01.906357 coreos-metadata[1287]: Dec 13 03:53:01.906 INFO Fetch successful
Dec 13 03:53:01.906357 coreos-metadata[1287]: Dec 13 03:53:01.906 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Dec 13 03:53:01.923853 coreos-metadata[1287]: Dec 13 03:53:01.923 INFO Fetch successful
Dec 13 03:53:01.923853 coreos-metadata[1287]: Dec 13 03:53:01.923 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Dec 13 03:53:01.940153 coreos-metadata[1287]: Dec 13 03:53:01.940 INFO Fetch successful
Dec 13 03:53:01.957095 systemd[1]: Finished coreos-metadata.service.
Dec 13 03:53:03.961579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 03:53:03.962113 systemd[1]: Stopped kubelet.service.
Dec 13 03:53:03.967220 systemd[1]: Starting kubelet.service...
Dec 13 03:53:04.578139 systemd[1]: Started kubelet.service.
Dec 13 03:53:04.702581 kubelet[1327]: E1213 03:53:04.702520 1327 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 03:53:04.707445 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 03:53:04.707572 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 03:53:04.724460 systemd[1]: Stopped kubelet.service.
Dec 13 03:53:04.726624 systemd[1]: Starting kubelet.service...
Dec 13 03:53:04.756529 systemd[1]: Reloading.
Dec 13 03:53:04.878204 /usr/lib/systemd/system-generators/torcx-generator[1359]: time="2024-12-13T03:53:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 03:53:04.878238 /usr/lib/systemd/system-generators/torcx-generator[1359]: time="2024-12-13T03:53:04Z" level=info msg="torcx already run"
Dec 13 03:53:05.413858 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 03:53:05.413894 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 03:53:05.437119 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 03:53:05.544010 systemd[1]: Started kubelet.service.
Dec 13 03:53:05.560312 systemd[1]: Stopping kubelet.service...
Dec 13 03:53:05.563933 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 03:53:05.564377 systemd[1]: Stopped kubelet.service.
Dec 13 03:53:05.567887 systemd[1]: Starting kubelet.service...
Dec 13 03:53:05.653883 systemd[1]: Started kubelet.service.
Dec 13 03:53:05.706007 kubelet[1417]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 03:53:05.706338 kubelet[1417]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 03:53:05.706402 kubelet[1417]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 03:53:05.914288 kubelet[1417]: I1213 03:53:05.914087 1417 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 03:53:06.601609 kubelet[1417]: I1213 03:53:06.601576 1417 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 03:53:06.601762 kubelet[1417]: I1213 03:53:06.601751 1417 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 03:53:06.602095 kubelet[1417]: I1213 03:53:06.602075 1417 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 03:53:06.716507 kubelet[1417]: I1213 03:53:06.716452 1417 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 03:53:06.754895 kubelet[1417]: I1213 03:53:06.754808 1417 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 03:53:06.761222 kubelet[1417]: I1213 03:53:06.761125 1417 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 03:53:06.761631 kubelet[1417]: I1213 03:53:06.761207 1417 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.199","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 03:53:06.761901 kubelet[1417]: I1213 03:53:06.761644 1417 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 03:53:06.761901 kubelet[1417]: I1213 03:53:06.761672 1417 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 03:53:06.761901 kubelet[1417]: I1213 03:53:06.761882 1417 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 03:53:06.765304 kubelet[1417]: I1213 03:53:06.765251 1417 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 03:53:06.765304 kubelet[1417]: I1213 03:53:06.765304 1417 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 03:53:06.765608 kubelet[1417]: I1213 03:53:06.765355 1417 kubelet.go:312] "Adding apiserver pod source"
Dec 13 03:53:06.765608 kubelet[1417]: I1213 03:53:06.765386 1417 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 03:53:06.766404 kubelet[1417]: E1213 03:53:06.766345 1417 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:06.766513 kubelet[1417]: E1213 03:53:06.766465 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:06.775685 kubelet[1417]: I1213 03:53:06.775643 1417 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 03:53:06.780501 kubelet[1417]: I1213 03:53:06.780440 1417 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 03:53:06.781160 kubelet[1417]: W1213 03:53:06.780923 1417 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 03:53:06.783273 kubelet[1417]: W1213 03:53:06.783220 1417 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 03:53:06.783559 kubelet[1417]: E1213 03:53:06.783286 1417 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 03:53:06.783859 kubelet[1417]: I1213 03:53:06.783829 1417 server.go:1264] "Started kubelet"
Dec 13 03:53:06.784162 kubelet[1417]: W1213 03:53:06.784110 1417 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.199" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 03:53:06.784162 kubelet[1417]: E1213 03:53:06.784167 1417 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.199" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 03:53:06.784341 kubelet[1417]: I1213 03:53:06.784208 1417 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 03:53:06.797152 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 03:53:06.797355 kubelet[1417]: I1213 03:53:06.797281 1417 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 03:53:06.802730 kubelet[1417]: I1213 03:53:06.802633 1417 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 03:53:06.803412 kubelet[1417]: I1213 03:53:06.803379 1417 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 03:53:06.812364 kubelet[1417]: I1213 03:53:06.812311 1417 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 03:53:06.820504 kubelet[1417]: I1213 03:53:06.819111 1417 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 03:53:06.820504 kubelet[1417]: I1213 03:53:06.819789 1417 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 03:53:06.820504 kubelet[1417]: I1213 03:53:06.819915 1417 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 03:53:06.844019 kubelet[1417]: I1213 03:53:06.843915 1417 factory.go:221] Registration of the systemd container factory successfully
Dec 13 03:53:06.844508 kubelet[1417]: I1213 03:53:06.844469 1417 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 03:53:06.866587 kubelet[1417]: I1213 03:53:06.866414 1417 factory.go:221] Registration of the containerd container factory successfully
Dec 13 03:53:06.877044 kubelet[1417]: E1213 03:53:06.877002 1417 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 03:53:06.883718 kubelet[1417]: E1213 03:53:06.883646 1417 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.199\" not found" node="172.24.4.199"
Dec 13 03:53:06.888233 kubelet[1417]: I1213 03:53:06.888204 1417 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 03:53:06.888233 kubelet[1417]: I1213 03:53:06.888221 1417 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 03:53:06.888233 kubelet[1417]: I1213 03:53:06.888237 1417 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 03:53:06.905462 kubelet[1417]: I1213 03:53:06.905439 1417 policy_none.go:49] "None policy: Start"
Dec 13 03:53:06.906321 kubelet[1417]: I1213 03:53:06.906308 1417 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 03:53:06.906438 kubelet[1417]: I1213 03:53:06.906427 1417 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 03:53:06.921315 kubelet[1417]: I1213 03:53:06.921279 1417 kubelet_node_status.go:73] "Attempting to register node" node="172.24.4.199"
Dec 13 03:53:06.924393 systemd[1]: Created slice kubepods.slice.
Dec 13 03:53:06.929734 kubelet[1417]: I1213 03:53:06.929699 1417 kubelet_node_status.go:76] "Successfully registered node" node="172.24.4.199"
Dec 13 03:53:06.932804 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 03:53:06.946218 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 03:53:06.959340 kubelet[1417]: E1213 03:53:06.950933 1417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.199\" not found"
Dec 13 03:53:06.959340 kubelet[1417]: I1213 03:53:06.950935 1417 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 03:53:06.959340 kubelet[1417]: I1213 03:53:06.951616 1417 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 03:53:06.959340 kubelet[1417]: I1213 03:53:06.952035 1417 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 03:53:06.961183 kubelet[1417]: E1213 03:53:06.960958 1417 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.199\" not found"
Dec 13 03:53:07.051456 kubelet[1417]: E1213 03:53:07.051397 1417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.199\" not found"
Dec 13 03:53:07.068413 kubelet[1417]: I1213 03:53:07.068317 1417 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 03:53:07.070136 kubelet[1417]: I1213 03:53:07.070100 1417 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 03:53:07.070218 kubelet[1417]: I1213 03:53:07.070152 1417 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 03:53:07.070218 kubelet[1417]: I1213 03:53:07.070207 1417 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 03:53:07.070338 kubelet[1417]: E1213 03:53:07.070305 1417 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 03:53:07.152427 kubelet[1417]: E1213 03:53:07.152236 1417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.199\" not found"
Dec 13 03:53:07.222850 sudo[1283]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:07.253690 kubelet[1417]: E1213 03:53:07.253567 1417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.199\" not found"
Dec 13 03:53:07.354037 kubelet[1417]: E1213 03:53:07.353885 1417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.199\" not found"
Dec 13 03:53:07.394803 sshd[1269]: pam_unix(sshd:session): session closed for user core
Dec 13 03:53:07.402354 systemd-logind[1131]: Session 7 logged out. Waiting for processes to exit.
Dec 13 03:53:07.403398 systemd[1]: sshd@6-172.24.4.199:22-172.24.4.1:50874.service: Deactivated successfully.
Dec 13 03:53:07.405619 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 03:53:07.406525 systemd[1]: session-7.scope: Consumed 1.007s CPU time.
Dec 13 03:53:07.408886 systemd-logind[1131]: Removed session 7.
Dec 13 03:53:07.454801 kubelet[1417]: E1213 03:53:07.454738 1417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.199\" not found"
Dec 13 03:53:07.555422 kubelet[1417]: E1213 03:53:07.555352 1417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.199\" not found"
Dec 13 03:53:07.604905 kubelet[1417]: I1213 03:53:07.604800 1417 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 03:53:07.605342 kubelet[1417]: W1213 03:53:07.605292 1417 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 03:53:07.605762 kubelet[1417]: W1213 03:53:07.605711 1417 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 03:53:07.657038 kubelet[1417]: E1213 03:53:07.656757 1417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.199\" not found"
Dec 13 03:53:07.758455 kubelet[1417]: E1213 03:53:07.758328 1417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.199\" not found"
Dec 13 03:53:07.766900 kubelet[1417]: E1213 03:53:07.766862 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:07.860773 kubelet[1417]: E1213 03:53:07.860629 1417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.199\" not found"
Dec 13 03:53:07.961811 kubelet[1417]: E1213 03:53:07.961722 1417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.199\" not found"
Dec 13 03:53:08.062677 kubelet[1417]: E1213 03:53:08.062618 1417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.199\" not found"
Dec 13 03:53:08.163531 kubelet[1417]: E1213 03:53:08.163446 1417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.199\" not found"
Dec 13 03:53:08.266105 kubelet[1417]: I1213 03:53:08.265678 1417 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Dec 13 03:53:08.267936 env[1143]: time="2024-12-13T03:53:08.267480904Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 03:53:08.269178 kubelet[1417]: I1213 03:53:08.269138 1417 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Dec 13 03:53:08.767590 kubelet[1417]: E1213 03:53:08.767518 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:08.768413 kubelet[1417]: I1213 03:53:08.767624 1417 apiserver.go:52] "Watching apiserver"
Dec 13 03:53:08.792215 kubelet[1417]: I1213 03:53:08.792094 1417 topology_manager.go:215] "Topology Admit Handler" podUID="c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" podNamespace="kube-system" podName="cilium-jphdx"
Dec 13 03:53:08.792488 kubelet[1417]: I1213 03:53:08.792396 1417 topology_manager.go:215] "Topology Admit Handler" podUID="2088e35f-d775-479b-9b65-d77005cc00a8" podNamespace="kube-system" podName="kube-proxy-p55zt"
Dec 13 03:53:08.806539 systemd[1]: Created slice kubepods-besteffort-pod2088e35f_d775_479b_9b65_d77005cc00a8.slice.
Dec 13 03:53:08.826300 kubelet[1417]: I1213 03:53:08.825249 1417 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 03:53:08.828886 systemd[1]: Created slice kubepods-burstable-podc4b8afc1_529b_46d6_bcd7_ad54eb092e8d.slice.
Dec 13 03:53:08.833296 kubelet[1417]: I1213 03:53:08.833228 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-bpf-maps\") pod \"cilium-jphdx\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") " pod="kube-system/cilium-jphdx"
Dec 13 03:53:08.833463 kubelet[1417]: I1213 03:53:08.833344 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-cilium-cgroup\") pod \"cilium-jphdx\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") " pod="kube-system/cilium-jphdx"
Dec 13 03:53:08.833463 kubelet[1417]: I1213 03:53:08.833435 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-etc-cni-netd\") pod \"cilium-jphdx\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") " pod="kube-system/cilium-jphdx"
Dec 13 03:53:08.833613 kubelet[1417]: I1213 03:53:08.833535 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2kg6\" (UniqueName: \"kubernetes.io/projected/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-kube-api-access-n2kg6\") pod \"cilium-jphdx\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") " pod="kube-system/cilium-jphdx"
Dec 13 03:53:08.833690 kubelet[1417]: I1213 03:53:08.833630 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlvth\" (UniqueName: \"kubernetes.io/projected/2088e35f-d775-479b-9b65-d77005cc00a8-kube-api-access-dlvth\") pod \"kube-proxy-p55zt\" (UID: \"2088e35f-d775-479b-9b65-d77005cc00a8\") " pod="kube-system/kube-proxy-p55zt"
Dec 13 03:53:08.833773 kubelet[1417]: I1213 03:53:08.833716 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-xtables-lock\") pod \"cilium-jphdx\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") " pod="kube-system/cilium-jphdx"
Dec 13 03:53:08.833843 kubelet[1417]: I1213 03:53:08.833765 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-clustermesh-secrets\") pod \"cilium-jphdx\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") " pod="kube-system/cilium-jphdx"
Dec 13 03:53:08.833915 kubelet[1417]: I1213 03:53:08.833865 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-cilium-config-path\") pod \"cilium-jphdx\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") " pod="kube-system/cilium-jphdx"
Dec 13 03:53:08.834052 kubelet[1417]: I1213 03:53:08.833950 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-host-proc-sys-kernel\") pod \"cilium-jphdx\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") " pod="kube-system/cilium-jphdx"
Dec 13 03:53:08.834150 kubelet[1417]: I1213 03:53:08.834052 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2088e35f-d775-479b-9b65-d77005cc00a8-xtables-lock\") pod \"kube-proxy-p55zt\" (UID: \"2088e35f-d775-479b-9b65-d77005cc00a8\") " pod="kube-system/kube-proxy-p55zt"
Dec 13 03:53:08.834150 kubelet[1417]: I1213 03:53:08.834138 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2088e35f-d775-479b-9b65-d77005cc00a8-lib-modules\") pod \"kube-proxy-p55zt\" (UID: \"2088e35f-d775-479b-9b65-d77005cc00a8\") " pod="kube-system/kube-proxy-p55zt"
Dec 13 03:53:08.834294 kubelet[1417]: I1213 03:53:08.834225 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-cilium-run\") pod \"cilium-jphdx\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") " pod="kube-system/cilium-jphdx"
Dec 13 03:53:08.834466 kubelet[1417]: I1213 03:53:08.834273 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-host-proc-sys-net\") pod \"cilium-jphdx\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") " pod="kube-system/cilium-jphdx"
Dec 13 03:53:08.834636 kubelet[1417]: I1213 03:53:08.834574 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-hostproc\") pod \"cilium-jphdx\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") " pod="kube-system/cilium-jphdx"
Dec 13 03:53:08.834722 kubelet[1417]: I1213 03:53:08.834665 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-cni-path\") pod \"cilium-jphdx\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") " pod="kube-system/cilium-jphdx"
Dec 13 03:53:08.834800 kubelet[1417]: I1213 03:53:08.834755 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-lib-modules\") pod \"cilium-jphdx\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") " pod="kube-system/cilium-jphdx"
Dec 13 03:53:08.834874 kubelet[1417]: I1213 03:53:08.834840 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-hubble-tls\") pod \"cilium-jphdx\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") " pod="kube-system/cilium-jphdx"
Dec 13 03:53:08.834945 kubelet[1417]: I1213 03:53:08.834916 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2088e35f-d775-479b-9b65-d77005cc00a8-kube-proxy\") pod \"kube-proxy-p55zt\" (UID: \"2088e35f-d775-479b-9b65-d77005cc00a8\") " pod="kube-system/kube-proxy-p55zt"
Dec 13 03:53:09.126893 env[1143]: time="2024-12-13T03:53:09.126539824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p55zt,Uid:2088e35f-d775-479b-9b65-d77005cc00a8,Namespace:kube-system,Attempt:0,}"
Dec 13 03:53:09.144795 env[1143]: time="2024-12-13T03:53:09.144618077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jphdx,Uid:c4b8afc1-529b-46d6-bcd7-ad54eb092e8d,Namespace:kube-system,Attempt:0,}"
Dec 13 03:53:09.769121 kubelet[1417]: E1213 03:53:09.769029 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:10.464506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4012504703.mount: Deactivated successfully.
Dec 13 03:53:10.500722 env[1143]: time="2024-12-13T03:53:10.500607175Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:53:10.503959 env[1143]: time="2024-12-13T03:53:10.503897382Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:53:10.510803 env[1143]: time="2024-12-13T03:53:10.510723704Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:53:10.516412 env[1143]: time="2024-12-13T03:53:10.516324705Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:53:10.521444 env[1143]: time="2024-12-13T03:53:10.521361153Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:53:10.526411 env[1143]: time="2024-12-13T03:53:10.526338432Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:53:10.530865 env[1143]: time="2024-12-13T03:53:10.530797055Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:53:10.535579 env[1143]: time="2024-12-13T03:53:10.535437577Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:53:10.599465 env[1143]: time="2024-12-13T03:53:10.599291327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:53:10.599465 env[1143]: time="2024-12-13T03:53:10.599339446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:53:10.599465 env[1143]: time="2024-12-13T03:53:10.599354184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:53:10.604926 env[1143]: time="2024-12-13T03:53:10.599962407Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918 pid=1468 runtime=io.containerd.runc.v2
Dec 13 03:53:10.609424 env[1143]: time="2024-12-13T03:53:10.609339118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:53:10.609667 env[1143]: time="2024-12-13T03:53:10.609640609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:53:10.609783 env[1143]: time="2024-12-13T03:53:10.609760552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:53:10.610075 env[1143]: time="2024-12-13T03:53:10.610047507Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/101f75761f22ca95388a2610a12d309b2150c3f3547525c62b380f079f0c452f pid=1483 runtime=io.containerd.runc.v2
Dec 13 03:53:10.623258 systemd[1]: Started cri-containerd-908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918.scope.
Dec 13 03:53:10.636989 systemd[1]: Started cri-containerd-101f75761f22ca95388a2610a12d309b2150c3f3547525c62b380f079f0c452f.scope.
Dec 13 03:53:10.681911 env[1143]: time="2024-12-13T03:53:10.681854577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jphdx,Uid:c4b8afc1-529b-46d6-bcd7-ad54eb092e8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918\""
Dec 13 03:53:10.687583 env[1143]: time="2024-12-13T03:53:10.687546989Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 03:53:10.693866 env[1143]: time="2024-12-13T03:53:10.693821573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p55zt,Uid:2088e35f-d775-479b-9b65-d77005cc00a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"101f75761f22ca95388a2610a12d309b2150c3f3547525c62b380f079f0c452f\""
Dec 13 03:53:10.772373 kubelet[1417]: E1213 03:53:10.769659 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:11.770572 kubelet[1417]: E1213 03:53:11.770448 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:12.770741 kubelet[1417]: E1213 03:53:12.770667 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:12.785948 update_engine[1132]: I1213 03:53:12.785028 1132 update_attempter.cc:509] Updating boot flags...
Dec 13 03:53:13.772187 kubelet[1417]: E1213 03:53:13.772007 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:14.772942 kubelet[1417]: E1213 03:53:14.772869 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:15.773670 kubelet[1417]: E1213 03:53:15.773572 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:16.774771 kubelet[1417]: E1213 03:53:16.774711 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:17.775563 kubelet[1417]: E1213 03:53:17.775509 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:18.779195 kubelet[1417]: E1213 03:53:18.778913 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:19.780276 kubelet[1417]: E1213 03:53:19.780200 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:20.053275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount528830371.mount: Deactivated successfully.
Dec 13 03:53:20.780944 kubelet[1417]: E1213 03:53:20.780840 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:21.782176 kubelet[1417]: E1213 03:53:21.782095 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:22.783393 kubelet[1417]: E1213 03:53:22.783290 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:23.784618 kubelet[1417]: E1213 03:53:23.784460 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:24.785249 kubelet[1417]: E1213 03:53:24.785193 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:25.517219 env[1143]: time="2024-12-13T03:53:25.516969274Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:53:25.525762 env[1143]: time="2024-12-13T03:53:25.525665006Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:53:25.530426 env[1143]: time="2024-12-13T03:53:25.530313593Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:53:25.532444 env[1143]: time="2024-12-13T03:53:25.532339849Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 03:53:25.538989 env[1143]: time="2024-12-13T03:53:25.538875003Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Dec 13 03:53:25.549347 env[1143]: time="2024-12-13T03:53:25.549245244Z" level=info msg="CreateContainer within sandbox \"908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 03:53:25.602686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount611723644.mount: Deactivated successfully.
Dec 13 03:53:25.620498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount970152123.mount: Deactivated successfully.
Dec 13 03:53:25.670359 env[1143]: time="2024-12-13T03:53:25.670274331Z" level=info msg="CreateContainer within sandbox \"908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0e2c8a392f9598731f9eef9f799c892fb7993938b59184533700f3aef0d37836\""
Dec 13 03:53:25.672193 env[1143]: time="2024-12-13T03:53:25.672119490Z" level=info msg="StartContainer for \"0e2c8a392f9598731f9eef9f799c892fb7993938b59184533700f3aef0d37836\""
Dec 13 03:53:25.730821 systemd[1]: Started cri-containerd-0e2c8a392f9598731f9eef9f799c892fb7993938b59184533700f3aef0d37836.scope.
Dec 13 03:53:25.787472 kubelet[1417]: E1213 03:53:25.786546 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:25.789299 env[1143]: time="2024-12-13T03:53:25.789258217Z" level=info msg="StartContainer for \"0e2c8a392f9598731f9eef9f799c892fb7993938b59184533700f3aef0d37836\" returns successfully"
Dec 13 03:53:25.792908 systemd[1]: cri-containerd-0e2c8a392f9598731f9eef9f799c892fb7993938b59184533700f3aef0d37836.scope: Deactivated successfully.
Dec 13 03:53:26.533310 env[1143]: time="2024-12-13T03:53:26.533089395Z" level=info msg="shim disconnected" id=0e2c8a392f9598731f9eef9f799c892fb7993938b59184533700f3aef0d37836
Dec 13 03:53:26.533310 env[1143]: time="2024-12-13T03:53:26.533223006Z" level=warning msg="cleaning up after shim disconnected" id=0e2c8a392f9598731f9eef9f799c892fb7993938b59184533700f3aef0d37836 namespace=k8s.io
Dec 13 03:53:26.533310 env[1143]: time="2024-12-13T03:53:26.533250005Z" level=info msg="cleaning up dead shim"
Dec 13 03:53:26.557360 env[1143]: time="2024-12-13T03:53:26.557256106Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:53:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1611 runtime=io.containerd.runc.v2\n"
Dec 13 03:53:26.593361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e2c8a392f9598731f9eef9f799c892fb7993938b59184533700f3aef0d37836-rootfs.mount: Deactivated successfully.
Dec 13 03:53:26.765793 kubelet[1417]: E1213 03:53:26.765698 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:26.787037 kubelet[1417]: E1213 03:53:26.786791 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:27.284253 env[1143]: time="2024-12-13T03:53:27.284205963Z" level=info msg="CreateContainer within sandbox \"908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 03:53:27.625394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3105799202.mount: Deactivated successfully.
Dec 13 03:53:27.643335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2713988132.mount: Deactivated successfully.
Dec 13 03:53:27.669114 env[1143]: time="2024-12-13T03:53:27.669022805Z" level=info msg="CreateContainer within sandbox \"908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"58c9519d80f60280d23e0e4c11f131f29e8c624e938caa00129ab07de6ffb4ca\""
Dec 13 03:53:27.671369 env[1143]: time="2024-12-13T03:53:27.671314279Z" level=info msg="StartContainer for \"58c9519d80f60280d23e0e4c11f131f29e8c624e938caa00129ab07de6ffb4ca\""
Dec 13 03:53:27.720848 systemd[1]: Started cri-containerd-58c9519d80f60280d23e0e4c11f131f29e8c624e938caa00129ab07de6ffb4ca.scope.
Dec 13 03:53:27.788027 kubelet[1417]: E1213 03:53:27.787916 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:27.789178 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 03:53:27.789441 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 03:53:27.789620 systemd[1]: Stopping systemd-sysctl.service...
Dec 13 03:53:27.791642 systemd[1]: Starting systemd-sysctl.service...
Dec 13 03:53:27.799419 systemd[1]: cri-containerd-58c9519d80f60280d23e0e4c11f131f29e8c624e938caa00129ab07de6ffb4ca.scope: Deactivated successfully.
Dec 13 03:53:27.804630 env[1143]: time="2024-12-13T03:53:27.804574962Z" level=info msg="StartContainer for \"58c9519d80f60280d23e0e4c11f131f29e8c624e938caa00129ab07de6ffb4ca\" returns successfully"
Dec 13 03:53:27.805193 systemd[1]: Finished systemd-sysctl.service.
Dec 13 03:53:28.004168 env[1143]: time="2024-12-13T03:53:28.004052208Z" level=info msg="shim disconnected" id=58c9519d80f60280d23e0e4c11f131f29e8c624e938caa00129ab07de6ffb4ca
Dec 13 03:53:28.004823 env[1143]: time="2024-12-13T03:53:28.004748109Z" level=warning msg="cleaning up after shim disconnected" id=58c9519d80f60280d23e0e4c11f131f29e8c624e938caa00129ab07de6ffb4ca namespace=k8s.io
Dec 13 03:53:28.005064 env[1143]: time="2024-12-13T03:53:28.005023124Z" level=info msg="cleaning up dead shim"
Dec 13 03:53:28.037672 env[1143]: time="2024-12-13T03:53:28.037586484Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:53:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1673 runtime=io.containerd.runc.v2\n"
Dec 13 03:53:28.293123 env[1143]: time="2024-12-13T03:53:28.292884545Z" level=info msg="CreateContainer within sandbox \"908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 03:53:28.359105 env[1143]: time="2024-12-13T03:53:28.359024479Z" level=info msg="CreateContainer within sandbox \"908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1f4099dc5cabcef8cb107513f2ea1d3683b33d615d35d3629a6e8864ae7bf3ce\""
Dec 13 03:53:28.359753 env[1143]: time="2024-12-13T03:53:28.359677109Z" level=info msg="StartContainer for \"1f4099dc5cabcef8cb107513f2ea1d3683b33d615d35d3629a6e8864ae7bf3ce\""
Dec 13 03:53:28.393417 systemd[1]: Started cri-containerd-1f4099dc5cabcef8cb107513f2ea1d3683b33d615d35d3629a6e8864ae7bf3ce.scope.
Dec 13 03:53:28.437068 systemd[1]: cri-containerd-1f4099dc5cabcef8cb107513f2ea1d3683b33d615d35d3629a6e8864ae7bf3ce.scope: Deactivated successfully.
Dec 13 03:53:28.443736 env[1143]: time="2024-12-13T03:53:28.443665353Z" level=info msg="StartContainer for \"1f4099dc5cabcef8cb107513f2ea1d3683b33d615d35d3629a6e8864ae7bf3ce\" returns successfully"
Dec 13 03:53:28.617310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58c9519d80f60280d23e0e4c11f131f29e8c624e938caa00129ab07de6ffb4ca-rootfs.mount: Deactivated successfully.
Dec 13 03:53:28.617453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2634400203.mount: Deactivated successfully.
Dec 13 03:53:28.716008 env[1143]: time="2024-12-13T03:53:28.715823216Z" level=info msg="shim disconnected" id=1f4099dc5cabcef8cb107513f2ea1d3683b33d615d35d3629a6e8864ae7bf3ce
Dec 13 03:53:28.716008 env[1143]: time="2024-12-13T03:53:28.716004545Z" level=warning msg="cleaning up after shim disconnected" id=1f4099dc5cabcef8cb107513f2ea1d3683b33d615d35d3629a6e8864ae7bf3ce namespace=k8s.io
Dec 13 03:53:28.717185 env[1143]: time="2024-12-13T03:53:28.716041984Z" level=info msg="cleaning up dead shim"
Dec 13 03:53:28.738170 env[1143]: time="2024-12-13T03:53:28.738108741Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:53:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1733 runtime=io.containerd.runc.v2\n"
Dec 13 03:53:28.788558 kubelet[1417]: E1213 03:53:28.788516 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:29.300206 env[1143]: time="2024-12-13T03:53:29.300120232Z" level=info msg="CreateContainer within sandbox \"908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 03:53:29.336325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2443293918.mount: Deactivated successfully.
Dec 13 03:53:29.360782 env[1143]: time="2024-12-13T03:53:29.360646955Z" level=info msg="CreateContainer within sandbox \"908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96\""
Dec 13 03:53:29.363364 env[1143]: time="2024-12-13T03:53:29.363275991Z" level=info msg="StartContainer for \"cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96\""
Dec 13 03:53:29.406160 systemd[1]: Started cri-containerd-cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96.scope.
Dec 13 03:53:29.446605 systemd[1]: cri-containerd-cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96.scope: Deactivated successfully.
Dec 13 03:53:29.449743 env[1143]: time="2024-12-13T03:53:29.449497127Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4b8afc1_529b_46d6_bcd7_ad54eb092e8d.slice/cri-containerd-cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96.scope/memory.events\": no such file or directory"
Dec 13 03:53:29.456214 env[1143]: time="2024-12-13T03:53:29.456173259Z" level=info msg="StartContainer for \"cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96\" returns successfully"
Dec 13 03:53:29.463074 env[1143]: time="2024-12-13T03:53:29.463025562Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:53:29.466702 env[1143]: time="2024-12-13T03:53:29.466646142Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:53:29.468916 env[1143]: time="2024-12-13T03:53:29.468881472Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:53:29.470443 env[1143]: time="2024-12-13T03:53:29.470413566Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:53:29.471203 env[1143]: time="2024-12-13T03:53:29.471160754Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\""
Dec 13 03:53:29.474774 env[1143]: time="2024-12-13T03:53:29.474725288Z" level=info msg="CreateContainer within sandbox \"101f75761f22ca95388a2610a12d309b2150c3f3547525c62b380f079f0c452f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 03:53:29.619216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96-rootfs.mount: Deactivated successfully.
Dec 13 03:53:29.789550 kubelet[1417]: E1213 03:53:29.789471 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:29.834733 env[1143]: time="2024-12-13T03:53:29.834641810Z" level=info msg="shim disconnected" id=cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96
Dec 13 03:53:29.835584 env[1143]: time="2024-12-13T03:53:29.835535561Z" level=warning msg="cleaning up after shim disconnected" id=cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96 namespace=k8s.io
Dec 13 03:53:29.835812 env[1143]: time="2024-12-13T03:53:29.835774248Z" level=info msg="cleaning up dead shim"
Dec 13 03:53:29.848856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1881357830.mount: Deactivated successfully.
Dec 13 03:53:29.866198 env[1143]: time="2024-12-13T03:53:29.866111632Z" level=info msg="CreateContainer within sandbox \"101f75761f22ca95388a2610a12d309b2150c3f3547525c62b380f079f0c452f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c730684546318e1a69bac8e4ee13c720bbfa31ab42cadfef6259585bd60b1fc7\""
Dec 13 03:53:29.868228 env[1143]: time="2024-12-13T03:53:29.868103056Z" level=info msg="StartContainer for \"c730684546318e1a69bac8e4ee13c720bbfa31ab42cadfef6259585bd60b1fc7\""
Dec 13 03:53:29.893220 env[1143]: time="2024-12-13T03:53:29.893139498Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:53:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1790 runtime=io.containerd.runc.v2\n"
Dec 13 03:53:29.923980 systemd[1]: Started cri-containerd-c730684546318e1a69bac8e4ee13c720bbfa31ab42cadfef6259585bd60b1fc7.scope.
Dec 13 03:53:29.973504 env[1143]: time="2024-12-13T03:53:29.973437779Z" level=info msg="StartContainer for \"c730684546318e1a69bac8e4ee13c720bbfa31ab42cadfef6259585bd60b1fc7\" returns successfully"
Dec 13 03:53:30.308252 env[1143]: time="2024-12-13T03:53:30.307824876Z" level=info msg="CreateContainer within sandbox \"908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 03:53:30.352453 env[1143]: time="2024-12-13T03:53:30.352201149Z" level=info msg="CreateContainer within sandbox \"908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7\""
Dec 13 03:53:30.353858 env[1143]: time="2024-12-13T03:53:30.353805529Z" level=info msg="StartContainer for \"f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7\""
Dec 13 03:53:30.381318 kubelet[1417]: I1213 03:53:30.381196 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p55zt" podStartSLOduration=5.603572488 podStartE2EDuration="24.381168568s" podCreationTimestamp="2024-12-13 03:53:06 +0000 UTC" firstStartedPulling="2024-12-13 03:53:10.69513528 +0000 UTC m=+5.036011237" lastFinishedPulling="2024-12-13 03:53:29.47273136 +0000 UTC m=+23.813607317" observedRunningTime="2024-12-13 03:53:30.323107133 +0000 UTC m=+24.663983090" watchObservedRunningTime="2024-12-13 03:53:30.381168568 +0000 UTC m=+24.722044525"
Dec 13 03:53:30.385824 systemd[1]: Started cri-containerd-f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7.scope.
Dec 13 03:53:30.454376 env[1143]: time="2024-12-13T03:53:30.454253346Z" level=info msg="StartContainer for \"f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7\" returns successfully"
Dec 13 03:53:30.627552 kubelet[1417]: I1213 03:53:30.627300 1417 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 03:53:30.791351 kubelet[1417]: E1213 03:53:30.791237 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:31.054098 kernel: Initializing XFRM netlink socket
Dec 13 03:53:31.357758 kubelet[1417]: I1213 03:53:31.357365 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jphdx" podStartSLOduration=10.507571042 podStartE2EDuration="25.357327424s" podCreationTimestamp="2024-12-13 03:53:06 +0000 UTC" firstStartedPulling="2024-12-13 03:53:10.68684471 +0000 UTC m=+5.027720668" lastFinishedPulling="2024-12-13 03:53:25.536601043 +0000 UTC m=+19.877477050" observedRunningTime="2024-12-13 03:53:31.350704127 +0000 UTC m=+25.691580124" watchObservedRunningTime="2024-12-13 03:53:31.357327424 +0000 UTC m=+25.698203431"
Dec 13 03:53:31.792289 kubelet[1417]: E1213 03:53:31.792127 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:32.792960 kubelet[1417]: E1213 03:53:32.792808 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:32.832210 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 03:53:32.832505 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 03:53:32.835393 systemd-networkd[974]: cilium_host: Link UP
Dec 13 03:53:32.838797 systemd-networkd[974]: cilium_net: Link UP
Dec 13 03:53:32.840435 systemd-networkd[974]: cilium_net: Gained carrier
Dec 13 03:53:32.843573 systemd-networkd[974]: cilium_host: Gained carrier
Dec 13 03:53:32.956362 systemd-networkd[974]: cilium_vxlan: Link UP
Dec 13 03:53:32.956381 systemd-networkd[974]: cilium_vxlan: Gained carrier
Dec 13 03:53:33.014225 systemd-networkd[974]: cilium_host: Gained IPv6LL
Dec 13 03:53:33.238282 systemd-networkd[974]: cilium_net: Gained IPv6LL
Dec 13 03:53:33.268015 kernel: NET: Registered PF_ALG protocol family
Dec 13 03:53:33.794790 kubelet[1417]: E1213 03:53:33.794704 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:34.134741 systemd-networkd[974]: cilium_vxlan: Gained IPv6LL
Dec 13 03:53:34.154804 kubelet[1417]: I1213 03:53:34.154628 1417 topology_manager.go:215] "Topology Admit Handler" podUID="e7e7abdc-90f2-4fcf-b566-2070e96f30e6" podNamespace="default" podName="nginx-deployment-85f456d6dd-jhtt2"
Dec 13 03:53:34.176836 systemd[1]: Created slice kubepods-besteffort-pode7e7abdc_90f2_4fcf_b566_2070e96f30e6.slice.
Dec 13 03:53:34.221750 kubelet[1417]: I1213 03:53:34.221678 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grsls\" (UniqueName: \"kubernetes.io/projected/e7e7abdc-90f2-4fcf-b566-2070e96f30e6-kube-api-access-grsls\") pod \"nginx-deployment-85f456d6dd-jhtt2\" (UID: \"e7e7abdc-90f2-4fcf-b566-2070e96f30e6\") " pod="default/nginx-deployment-85f456d6dd-jhtt2"
Dec 13 03:53:34.269850 systemd-networkd[974]: lxc_health: Link UP
Dec 13 03:53:34.294071 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 03:53:34.293808 systemd-networkd[974]: lxc_health: Gained carrier
Dec 13 03:53:34.486539 env[1143]: time="2024-12-13T03:53:34.486423974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-jhtt2,Uid:e7e7abdc-90f2-4fcf-b566-2070e96f30e6,Namespace:default,Attempt:0,}"
Dec 13 03:53:34.586259 systemd-networkd[974]: lxc528374d0b96c: Link UP
Dec 13 03:53:34.590699 kernel: eth0: renamed from tmp8c95d
Dec 13 03:53:34.594061 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc528374d0b96c: link becomes ready
Dec 13 03:53:34.594397 systemd-networkd[974]: lxc528374d0b96c: Gained carrier
Dec 13 03:53:34.795670 kubelet[1417]: E1213 03:53:34.795559 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:35.478206 systemd-networkd[974]: lxc_health: Gained IPv6LL
Dec 13 03:53:35.796226 kubelet[1417]: E1213 03:53:35.796090 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:35.926371 systemd-networkd[974]: lxc528374d0b96c: Gained IPv6LL
Dec 13 03:53:36.797409 kubelet[1417]: E1213 03:53:36.797269 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:37.798504 kubelet[1417]: E1213 03:53:37.798444 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:38.799410 kubelet[1417]: E1213 03:53:38.799341 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:39.182302 env[1143]: time="2024-12-13T03:53:39.181950380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:53:39.182302 env[1143]: time="2024-12-13T03:53:39.182025761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:53:39.182302 env[1143]: time="2024-12-13T03:53:39.182038845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:53:39.182302 env[1143]: time="2024-12-13T03:53:39.182246754Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c95db40fc6aa04ace0b6c1324cdc9b06fa13955940a5257f0fd870fc1b38860 pid=2474 runtime=io.containerd.runc.v2
Dec 13 03:53:39.200351 systemd[1]: Started cri-containerd-8c95db40fc6aa04ace0b6c1324cdc9b06fa13955940a5257f0fd870fc1b38860.scope.
Dec 13 03:53:39.263284 env[1143]: time="2024-12-13T03:53:39.263220841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-jhtt2,Uid:e7e7abdc-90f2-4fcf-b566-2070e96f30e6,Namespace:default,Attempt:0,} returns sandbox id \"8c95db40fc6aa04ace0b6c1324cdc9b06fa13955940a5257f0fd870fc1b38860\""
Dec 13 03:53:39.265206 env[1143]: time="2024-12-13T03:53:39.265176210Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 03:53:39.800420 kubelet[1417]: E1213 03:53:39.800313 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:40.801127 kubelet[1417]: E1213 03:53:40.801059 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:41.802227 kubelet[1417]: E1213 03:53:41.802175 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:42.802630 kubelet[1417]: E1213 03:53:42.802569 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:43.697164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2342280775.mount: Deactivated successfully.
Dec 13 03:53:43.802941 kubelet[1417]: E1213 03:53:43.802859 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:44.803687 kubelet[1417]: E1213 03:53:44.803618 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:45.803900 kubelet[1417]: E1213 03:53:45.803833 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:53:45.988088 env[1143]: time="2024-12-13T03:53:45.988009578Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:53:46.007326 env[1143]: time="2024-12-13T03:53:46.007180794Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:53:46.012742 env[1143]: time="2024-12-13T03:53:46.012639592Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:53:46.021176 env[1143]: time="2024-12-13T03:53:46.018803149Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:53:46.021465 env[1143]: time="2024-12-13T03:53:46.021358973Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 03:53:46.029288 env[1143]: time="2024-12-13T03:53:46.029194090Z" level=info msg="CreateContainer within sandbox \"8c95db40fc6aa04ace0b6c1324cdc9b06fa13955940a5257f0fd870fc1b38860\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 03:53:46.067780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3638578386.mount: Deactivated successfully.
Dec 13 03:53:46.086913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2682798528.mount: Deactivated successfully.
Dec 13 03:53:46.099070 env[1143]: time="2024-12-13T03:53:46.099031538Z" level=info msg="CreateContainer within sandbox \"8c95db40fc6aa04ace0b6c1324cdc9b06fa13955940a5257f0fd870fc1b38860\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d6ef2c302f1a769b18d2abdd8cac4952f2b98238fab87aee58c7515e0356f045\""
Dec 13 03:53:46.099981 env[1143]: time="2024-12-13T03:53:46.099941311Z" level=info msg="StartContainer for \"d6ef2c302f1a769b18d2abdd8cac4952f2b98238fab87aee58c7515e0356f045\""
Dec 13 03:53:46.137684 systemd[1]: Started cri-containerd-d6ef2c302f1a769b18d2abdd8cac4952f2b98238fab87aee58c7515e0356f045.scope.
Dec 13 03:53:46.187837 env[1143]: time="2024-12-13T03:53:46.187801346Z" level=info msg="StartContainer for \"d6ef2c302f1a769b18d2abdd8cac4952f2b98238fab87aee58c7515e0356f045\" returns successfully" Dec 13 03:53:46.766501 kubelet[1417]: E1213 03:53:46.766444 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:53:46.804798 kubelet[1417]: E1213 03:53:46.804680 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:53:47.805752 kubelet[1417]: E1213 03:53:47.805690 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:53:48.806730 kubelet[1417]: E1213 03:53:48.806669 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:53:49.808284 kubelet[1417]: E1213 03:53:49.808190 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:53:50.809552 kubelet[1417]: E1213 03:53:50.809427 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:53:51.809707 kubelet[1417]: E1213 03:53:51.809608 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:53:52.810870 kubelet[1417]: E1213 03:53:52.810772 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:53:53.812931 kubelet[1417]: E1213 03:53:53.812774 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:53:54.814080 kubelet[1417]: E1213 03:53:54.814004 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 03:53:55.815748 kubelet[1417]: E1213 03:53:55.815677 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:53:56.817053 kubelet[1417]: E1213 03:53:56.816997 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:53:57.818303 kubelet[1417]: E1213 03:53:57.818231 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:53:58.177820 kubelet[1417]: I1213 03:53:58.177558 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-jhtt2" podStartSLOduration=17.416001083 podStartE2EDuration="24.177521658s" podCreationTimestamp="2024-12-13 03:53:34 +0000 UTC" firstStartedPulling="2024-12-13 03:53:39.264487381 +0000 UTC m=+33.605363338" lastFinishedPulling="2024-12-13 03:53:46.026007906 +0000 UTC m=+40.366883913" observedRunningTime="2024-12-13 03:53:46.456415101 +0000 UTC m=+40.797291109" watchObservedRunningTime="2024-12-13 03:53:58.177521658 +0000 UTC m=+52.518397656" Dec 13 03:53:58.178233 kubelet[1417]: I1213 03:53:58.177827 1417 topology_manager.go:215] "Topology Admit Handler" podUID="03bfb449-eadd-4dae-8395-a396e1d12cdd" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 03:53:58.192722 systemd[1]: Created slice kubepods-besteffort-pod03bfb449_eadd_4dae_8395_a396e1d12cdd.slice. 
Dec 13 03:53:58.200836 kubelet[1417]: I1213 03:53:58.200787 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84b85\" (UniqueName: \"kubernetes.io/projected/03bfb449-eadd-4dae-8395-a396e1d12cdd-kube-api-access-84b85\") pod \"nfs-server-provisioner-0\" (UID: \"03bfb449-eadd-4dae-8395-a396e1d12cdd\") " pod="default/nfs-server-provisioner-0" Dec 13 03:53:58.201237 kubelet[1417]: I1213 03:53:58.201197 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/03bfb449-eadd-4dae-8395-a396e1d12cdd-data\") pod \"nfs-server-provisioner-0\" (UID: \"03bfb449-eadd-4dae-8395-a396e1d12cdd\") " pod="default/nfs-server-provisioner-0" Dec 13 03:53:58.503878 env[1143]: time="2024-12-13T03:53:58.503614904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:03bfb449-eadd-4dae-8395-a396e1d12cdd,Namespace:default,Attempt:0,}" Dec 13 03:53:58.616515 systemd-networkd[974]: lxc9d8a4f469ddd: Link UP Dec 13 03:53:58.630201 kernel: eth0: renamed from tmp69e65 Dec 13 03:53:58.641843 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 03:53:58.642101 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9d8a4f469ddd: link becomes ready Dec 13 03:53:58.641856 systemd-networkd[974]: lxc9d8a4f469ddd: Gained carrier Dec 13 03:53:58.820168 kubelet[1417]: E1213 03:53:58.819876 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:53:59.137378 env[1143]: time="2024-12-13T03:53:59.136759585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:53:59.137814 env[1143]: time="2024-12-13T03:53:59.136884179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:53:59.137814 env[1143]: time="2024-12-13T03:53:59.136928682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:53:59.137814 env[1143]: time="2024-12-13T03:53:59.137369397Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/69e6587c352d3864397b82f87cbb1ad637a4df5d289f4fa04d6afc14e3034814 pid=2598 runtime=io.containerd.runc.v2 Dec 13 03:53:59.199813 systemd[1]: Started cri-containerd-69e6587c352d3864397b82f87cbb1ad637a4df5d289f4fa04d6afc14e3034814.scope. Dec 13 03:53:59.265545 env[1143]: time="2024-12-13T03:53:59.265447612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:03bfb449-eadd-4dae-8395-a396e1d12cdd,Namespace:default,Attempt:0,} returns sandbox id \"69e6587c352d3864397b82f87cbb1ad637a4df5d289f4fa04d6afc14e3034814\"" Dec 13 03:53:59.268347 env[1143]: time="2024-12-13T03:53:59.267886239Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 03:53:59.798693 systemd-networkd[974]: lxc9d8a4f469ddd: Gained IPv6LL Dec 13 03:53:59.820677 kubelet[1417]: E1213 03:53:59.820586 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:00.821791 kubelet[1417]: E1213 03:54:00.821734 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:01.822678 kubelet[1417]: E1213 03:54:01.822595 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:02.823276 kubelet[1417]: E1213 03:54:02.823215 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:03.756813 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2324584897.mount: Deactivated successfully. Dec 13 03:54:03.824087 kubelet[1417]: E1213 03:54:03.824035 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:04.825237 kubelet[1417]: E1213 03:54:04.825172 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:05.825411 kubelet[1417]: E1213 03:54:05.825345 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:06.766639 kubelet[1417]: E1213 03:54:06.766546 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:06.825946 kubelet[1417]: E1213 03:54:06.825863 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:07.826601 kubelet[1417]: E1213 03:54:07.826526 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:08.674158 env[1143]: time="2024-12-13T03:54:08.673854369Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:54:08.682410 env[1143]: time="2024-12-13T03:54:08.682331513Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:54:08.688475 env[1143]: time="2024-12-13T03:54:08.688411886Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:54:08.693927 
env[1143]: time="2024-12-13T03:54:08.693841775Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:54:08.696171 env[1143]: time="2024-12-13T03:54:08.696097902Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 03:54:08.705070 env[1143]: time="2024-12-13T03:54:08.704946239Z" level=info msg="CreateContainer within sandbox \"69e6587c352d3864397b82f87cbb1ad637a4df5d289f4fa04d6afc14e3034814\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 03:54:08.827869 kubelet[1417]: E1213 03:54:08.827723 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:09.183083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3865009853.mount: Deactivated successfully. Dec 13 03:54:09.199096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1668353196.mount: Deactivated successfully. Dec 13 03:54:09.222256 env[1143]: time="2024-12-13T03:54:09.222166003Z" level=info msg="CreateContainer within sandbox \"69e6587c352d3864397b82f87cbb1ad637a4df5d289f4fa04d6afc14e3034814\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"b17366e360b2f8ab7d0604d738d3b4ed8645da71d1b6f46260e127542df789b8\"" Dec 13 03:54:09.223952 env[1143]: time="2024-12-13T03:54:09.223868810Z" level=info msg="StartContainer for \"b17366e360b2f8ab7d0604d738d3b4ed8645da71d1b6f46260e127542df789b8\"" Dec 13 03:54:09.269367 systemd[1]: Started cri-containerd-b17366e360b2f8ab7d0604d738d3b4ed8645da71d1b6f46260e127542df789b8.scope. 
Dec 13 03:54:09.320178 env[1143]: time="2024-12-13T03:54:09.320125053Z" level=info msg="StartContainer for \"b17366e360b2f8ab7d0604d738d3b4ed8645da71d1b6f46260e127542df789b8\" returns successfully" Dec 13 03:54:09.596156 kubelet[1417]: I1213 03:54:09.596008 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.163458918 podStartE2EDuration="11.595930135s" podCreationTimestamp="2024-12-13 03:53:58 +0000 UTC" firstStartedPulling="2024-12-13 03:53:59.267564958 +0000 UTC m=+53.608440915" lastFinishedPulling="2024-12-13 03:54:08.700036125 +0000 UTC m=+63.040912132" observedRunningTime="2024-12-13 03:54:09.595041652 +0000 UTC m=+63.935917649" watchObservedRunningTime="2024-12-13 03:54:09.595930135 +0000 UTC m=+63.936806142" Dec 13 03:54:09.828870 kubelet[1417]: E1213 03:54:09.828755 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:10.829448 kubelet[1417]: E1213 03:54:10.829384 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:11.830864 kubelet[1417]: E1213 03:54:11.830763 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:12.831059 kubelet[1417]: E1213 03:54:12.830945 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:13.832202 kubelet[1417]: E1213 03:54:13.832114 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:14.832548 kubelet[1417]: E1213 03:54:14.832484 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:15.834346 kubelet[1417]: E1213 03:54:15.834271 1417 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:16.836234 kubelet[1417]: E1213 03:54:16.836161 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:17.838062 kubelet[1417]: E1213 03:54:17.837873 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:18.839186 kubelet[1417]: E1213 03:54:18.839062 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:19.149114 kubelet[1417]: I1213 03:54:19.148337 1417 topology_manager.go:215] "Topology Admit Handler" podUID="af963baf-bc8c-461f-83d1-37dc973f04c8" podNamespace="default" podName="test-pod-1" Dec 13 03:54:19.164169 systemd[1]: Created slice kubepods-besteffort-podaf963baf_bc8c_461f_83d1_37dc973f04c8.slice. Dec 13 03:54:19.264360 kubelet[1417]: I1213 03:54:19.264271 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5232fdc0-2478-40fd-b47d-19923fe473eb\" (UniqueName: \"kubernetes.io/nfs/af963baf-bc8c-461f-83d1-37dc973f04c8-pvc-5232fdc0-2478-40fd-b47d-19923fe473eb\") pod \"test-pod-1\" (UID: \"af963baf-bc8c-461f-83d1-37dc973f04c8\") " pod="default/test-pod-1" Dec 13 03:54:19.265247 kubelet[1417]: I1213 03:54:19.265204 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r2th\" (UniqueName: \"kubernetes.io/projected/af963baf-bc8c-461f-83d1-37dc973f04c8-kube-api-access-6r2th\") pod \"test-pod-1\" (UID: \"af963baf-bc8c-461f-83d1-37dc973f04c8\") " pod="default/test-pod-1" Dec 13 03:54:19.651106 kernel: FS-Cache: Loaded Dec 13 03:54:19.766334 kernel: RPC: Registered named UNIX socket transport module. Dec 13 03:54:19.766509 kernel: RPC: Registered udp transport module. 
Dec 13 03:54:19.766553 kernel: RPC: Registered tcp transport module. Dec 13 03:54:19.766580 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 03:54:19.839910 kubelet[1417]: E1213 03:54:19.839858 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:19.857006 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 03:54:20.078366 kernel: NFS: Registering the id_resolver key type Dec 13 03:54:20.078603 kernel: Key type id_resolver registered Dec 13 03:54:20.078733 kernel: Key type id_legacy registered Dec 13 03:54:20.170138 nfsidmap[2757]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Dec 13 03:54:20.181329 nfsidmap[2758]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Dec 13 03:54:20.374426 env[1143]: time="2024-12-13T03:54:20.373471351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:af963baf-bc8c-461f-83d1-37dc973f04c8,Namespace:default,Attempt:0,}" Dec 13 03:54:20.455942 systemd-networkd[974]: lxc1bb585bba43b: Link UP Dec 13 03:54:20.468056 kernel: eth0: renamed from tmpcc7db Dec 13 03:54:20.477413 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 03:54:20.477577 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1bb585bba43b: link becomes ready Dec 13 03:54:20.477741 systemd-networkd[974]: lxc1bb585bba43b: Gained carrier Dec 13 03:54:20.712776 env[1143]: time="2024-12-13T03:54:20.712546080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:54:20.712776 env[1143]: time="2024-12-13T03:54:20.712669413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:54:20.713246 env[1143]: time="2024-12-13T03:54:20.712725218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:54:20.713781 env[1143]: time="2024-12-13T03:54:20.713594971Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc7db9af994e2c1000b3f725ee610813380aa3008946049af27221a5147fc23d pid=2785 runtime=io.containerd.runc.v2 Dec 13 03:54:20.756214 systemd[1]: run-containerd-runc-k8s.io-cc7db9af994e2c1000b3f725ee610813380aa3008946049af27221a5147fc23d-runc.es6bnT.mount: Deactivated successfully. Dec 13 03:54:20.767292 systemd[1]: Started cri-containerd-cc7db9af994e2c1000b3f725ee610813380aa3008946049af27221a5147fc23d.scope. Dec 13 03:54:20.817805 env[1143]: time="2024-12-13T03:54:20.817744632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:af963baf-bc8c-461f-83d1-37dc973f04c8,Namespace:default,Attempt:0,} returns sandbox id \"cc7db9af994e2c1000b3f725ee610813380aa3008946049af27221a5147fc23d\"" Dec 13 03:54:20.819494 env[1143]: time="2024-12-13T03:54:20.819471505Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 03:54:20.840629 kubelet[1417]: E1213 03:54:20.840543 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:21.240605 env[1143]: time="2024-12-13T03:54:21.240523845Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:54:21.243951 env[1143]: time="2024-12-13T03:54:21.243891044Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 
03:54:21.247928 env[1143]: time="2024-12-13T03:54:21.247873515Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:54:21.260706 env[1143]: time="2024-12-13T03:54:21.260620618Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 03:54:21.262245 env[1143]: time="2024-12-13T03:54:21.262188981Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:54:21.270324 env[1143]: time="2024-12-13T03:54:21.270225127Z" level=info msg="CreateContainer within sandbox \"cc7db9af994e2c1000b3f725ee610813380aa3008946049af27221a5147fc23d\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 03:54:21.297156 env[1143]: time="2024-12-13T03:54:21.296999741Z" level=info msg="CreateContainer within sandbox \"cc7db9af994e2c1000b3f725ee610813380aa3008946049af27221a5147fc23d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"26f67cfe477ffc942102ea778f12a52af059ae1bf3bc548007f80f85b92017a4\"" Dec 13 03:54:21.299383 env[1143]: time="2024-12-13T03:54:21.299220565Z" level=info msg="StartContainer for \"26f67cfe477ffc942102ea778f12a52af059ae1bf3bc548007f80f85b92017a4\"" Dec 13 03:54:21.336450 systemd[1]: Started cri-containerd-26f67cfe477ffc942102ea778f12a52af059ae1bf3bc548007f80f85b92017a4.scope. 
Dec 13 03:54:21.383673 env[1143]: time="2024-12-13T03:54:21.383570502Z" level=info msg="StartContainer for \"26f67cfe477ffc942102ea778f12a52af059ae1bf3bc548007f80f85b92017a4\" returns successfully" Dec 13 03:54:21.629073 kubelet[1417]: I1213 03:54:21.628776 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=21.181172272 podStartE2EDuration="21.628736772s" podCreationTimestamp="2024-12-13 03:54:00 +0000 UTC" firstStartedPulling="2024-12-13 03:54:20.819242431 +0000 UTC m=+75.160118389" lastFinishedPulling="2024-12-13 03:54:21.266806881 +0000 UTC m=+75.607682889" observedRunningTime="2024-12-13 03:54:21.627849586 +0000 UTC m=+75.968725593" watchObservedRunningTime="2024-12-13 03:54:21.628736772 +0000 UTC m=+75.969612779" Dec 13 03:54:21.840759 kubelet[1417]: E1213 03:54:21.840688 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:21.946286 systemd-networkd[974]: lxc1bb585bba43b: Gained IPv6LL Dec 13 03:54:22.842192 kubelet[1417]: E1213 03:54:22.842112 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:23.842757 kubelet[1417]: E1213 03:54:23.842679 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:24.843718 kubelet[1417]: E1213 03:54:24.843649 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:25.845700 kubelet[1417]: E1213 03:54:25.845581 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:26.766214 kubelet[1417]: E1213 03:54:26.766116 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:26.845848 kubelet[1417]: E1213 
03:54:26.845772 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:27.846899 kubelet[1417]: E1213 03:54:27.846712 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:28.853748 kubelet[1417]: E1213 03:54:28.853520 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:28.969201 systemd[1]: run-containerd-runc-k8s.io-f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7-runc.nhPJ5W.mount: Deactivated successfully. Dec 13 03:54:29.018335 env[1143]: time="2024-12-13T03:54:29.018072617Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 03:54:29.032095 env[1143]: time="2024-12-13T03:54:29.031892345Z" level=info msg="StopContainer for \"f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7\" with timeout 2 (s)" Dec 13 03:54:29.032813 env[1143]: time="2024-12-13T03:54:29.032742388Z" level=info msg="Stop container \"f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7\" with signal terminated" Dec 13 03:54:29.049169 systemd-networkd[974]: lxc_health: Link DOWN Dec 13 03:54:29.049184 systemd-networkd[974]: lxc_health: Lost carrier Dec 13 03:54:29.103810 systemd[1]: cri-containerd-f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7.scope: Deactivated successfully. Dec 13 03:54:29.104449 systemd[1]: cri-containerd-f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7.scope: Consumed 8.875s CPU time. 
Dec 13 03:54:29.138400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7-rootfs.mount: Deactivated successfully. Dec 13 03:54:29.150141 env[1143]: time="2024-12-13T03:54:29.150077671Z" level=info msg="shim disconnected" id=f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7 Dec 13 03:54:29.150390 env[1143]: time="2024-12-13T03:54:29.150368429Z" level=warning msg="cleaning up after shim disconnected" id=f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7 namespace=k8s.io Dec 13 03:54:29.150486 env[1143]: time="2024-12-13T03:54:29.150470712Z" level=info msg="cleaning up dead shim" Dec 13 03:54:29.158671 env[1143]: time="2024-12-13T03:54:29.158635307Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:54:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2915 runtime=io.containerd.runc.v2\n" Dec 13 03:54:29.162959 env[1143]: time="2024-12-13T03:54:29.162923753Z" level=info msg="StopContainer for \"f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7\" returns successfully" Dec 13 03:54:29.164396 env[1143]: time="2024-12-13T03:54:29.164321368Z" level=info msg="StopPodSandbox for \"908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918\"" Dec 13 03:54:29.164562 env[1143]: time="2024-12-13T03:54:29.164508301Z" level=info msg="Container to stop \"cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 03:54:29.164610 env[1143]: time="2024-12-13T03:54:29.164562303Z" level=info msg="Container to stop \"0e2c8a392f9598731f9eef9f799c892fb7993938b59184533700f3aef0d37836\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 03:54:29.164647 env[1143]: time="2024-12-13T03:54:29.164599293Z" level=info msg="Container to stop \"58c9519d80f60280d23e0e4c11f131f29e8c624e938caa00129ab07de6ffb4ca\" must be in running or 
unknown state, current state \"CONTAINER_EXITED\"" Dec 13 03:54:29.164680 env[1143]: time="2024-12-13T03:54:29.164653615Z" level=info msg="Container to stop \"f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 03:54:29.164717 env[1143]: time="2024-12-13T03:54:29.164686076Z" level=info msg="Container to stop \"1f4099dc5cabcef8cb107513f2ea1d3683b33d615d35d3629a6e8864ae7bf3ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 03:54:29.166940 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918-shm.mount: Deactivated successfully. Dec 13 03:54:29.176752 systemd[1]: cri-containerd-908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918.scope: Deactivated successfully. Dec 13 03:54:29.206289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918-rootfs.mount: Deactivated successfully. 
Dec 13 03:54:29.226378 env[1143]: time="2024-12-13T03:54:29.226263950Z" level=info msg="shim disconnected" id=908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918
Dec 13 03:54:29.226378 env[1143]: time="2024-12-13T03:54:29.226381863Z" level=warning msg="cleaning up after shim disconnected" id=908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918 namespace=k8s.io
Dec 13 03:54:29.226378 env[1143]: time="2024-12-13T03:54:29.226407631Z" level=info msg="cleaning up dead shim"
Dec 13 03:54:29.243956 env[1143]: time="2024-12-13T03:54:29.243880279Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:54:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2946 runtime=io.containerd.runc.v2\n"
Dec 13 03:54:29.247484 env[1143]: time="2024-12-13T03:54:29.247422428Z" level=info msg="TearDown network for sandbox \"908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918\" successfully"
Dec 13 03:54:29.247769 env[1143]: time="2024-12-13T03:54:29.247690302Z" level=info msg="StopPodSandbox for \"908d4d67c4853988c6238ab75f19ade03f612e2a9da7cad9925bd2e9b0feb918\" returns successfully"
Dec 13 03:54:29.379134 kubelet[1417]: I1213 03:54:29.375629 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-bpf-maps\") pod \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") "
Dec 13 03:54:29.379134 kubelet[1417]: I1213 03:54:29.375713 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-cilium-cgroup\") pod \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") "
Dec 13 03:54:29.379134 kubelet[1417]: I1213 03:54:29.375764 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-lib-modules\") pod \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") "
Dec 13 03:54:29.379134 kubelet[1417]: I1213 03:54:29.375818 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-hubble-tls\") pod \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") "
Dec 13 03:54:29.379134 kubelet[1417]: I1213 03:54:29.375860 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-etc-cni-netd\") pod \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") "
Dec 13 03:54:29.379134 kubelet[1417]: I1213 03:54:29.375902 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-cilium-run\") pod \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") "
Dec 13 03:54:29.379888 kubelet[1417]: I1213 03:54:29.375939 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-hostproc\") pod \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") "
Dec 13 03:54:29.379888 kubelet[1417]: I1213 03:54:29.376020 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-host-proc-sys-net\") pod \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") "
Dec 13 03:54:29.379888 kubelet[1417]: I1213 03:54:29.376069 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-clustermesh-secrets\") pod \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") "
Dec 13 03:54:29.379888 kubelet[1417]: I1213 03:54:29.376114 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-cilium-config-path\") pod \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") "
Dec 13 03:54:29.379888 kubelet[1417]: I1213 03:54:29.376161 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2kg6\" (UniqueName: \"kubernetes.io/projected/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-kube-api-access-n2kg6\") pod \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") "
Dec 13 03:54:29.379888 kubelet[1417]: I1213 03:54:29.376199 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-xtables-lock\") pod \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") "
Dec 13 03:54:29.380457 kubelet[1417]: I1213 03:54:29.376245 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-cni-path\") pod \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") "
Dec 13 03:54:29.380457 kubelet[1417]: I1213 03:54:29.376288 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-host-proc-sys-kernel\") pod \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\" (UID: \"c4b8afc1-529b-46d6-bcd7-ad54eb092e8d\") "
Dec 13 03:54:29.380457 kubelet[1417]: I1213 03:54:29.376437 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" (UID: "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:54:29.380457 kubelet[1417]: I1213 03:54:29.376556 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" (UID: "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:54:29.380457 kubelet[1417]: I1213 03:54:29.376595 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" (UID: "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:54:29.380927 kubelet[1417]: I1213 03:54:29.376632 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" (UID: "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:54:29.381694 kubelet[1417]: I1213 03:54:29.381572 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" (UID: "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:54:29.382089 kubelet[1417]: I1213 03:54:29.382034 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" (UID: "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:54:29.382424 kubelet[1417]: I1213 03:54:29.382385 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-hostproc" (OuterVolumeSpecName: "hostproc") pod "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" (UID: "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:54:29.382738 kubelet[1417]: I1213 03:54:29.382675 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" (UID: "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:54:29.386234 kubelet[1417]: I1213 03:54:29.385211 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" (UID: "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:54:29.386488 kubelet[1417]: I1213 03:54:29.386290 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-cni-path" (OuterVolumeSpecName: "cni-path") pod "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" (UID: "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:54:29.394660 kubelet[1417]: I1213 03:54:29.394575 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" (UID: "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 03:54:29.395843 kubelet[1417]: I1213 03:54:29.395783 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-kube-api-access-n2kg6" (OuterVolumeSpecName: "kube-api-access-n2kg6") pod "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" (UID: "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d"). InnerVolumeSpecName "kube-api-access-n2kg6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 03:54:29.398552 kubelet[1417]: I1213 03:54:29.398506 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" (UID: "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 03:54:29.399308 kubelet[1417]: I1213 03:54:29.399233 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" (UID: "c4b8afc1-529b-46d6-bcd7-ad54eb092e8d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 03:54:29.477835 kubelet[1417]: I1213 03:54:29.477662 1417 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-xtables-lock\") on node \"172.24.4.199\" DevicePath \"\""
Dec 13 03:54:29.477835 kubelet[1417]: I1213 03:54:29.477741 1417 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-cni-path\") on node \"172.24.4.199\" DevicePath \"\""
Dec 13 03:54:29.477835 kubelet[1417]: I1213 03:54:29.477769 1417 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-host-proc-sys-kernel\") on node \"172.24.4.199\" DevicePath \"\""
Dec 13 03:54:29.477835 kubelet[1417]: I1213 03:54:29.477798 1417 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-bpf-maps\") on node \"172.24.4.199\" DevicePath \"\""
Dec 13 03:54:29.477835 kubelet[1417]: I1213 03:54:29.477822 1417 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-cilium-cgroup\") on node \"172.24.4.199\" DevicePath \"\""
Dec 13 03:54:29.477835 kubelet[1417]: I1213 03:54:29.477844 1417 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-lib-modules\") on node \"172.24.4.199\" DevicePath \"\""
Dec 13 03:54:29.477835 kubelet[1417]: I1213 03:54:29.477865 1417 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-hubble-tls\") on node \"172.24.4.199\" DevicePath \"\""
Dec 13 03:54:29.477835 kubelet[1417]: I1213 03:54:29.477886 1417 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-etc-cni-netd\") on node \"172.24.4.199\" DevicePath \"\""
Dec 13 03:54:29.479546 kubelet[1417]: I1213 03:54:29.477906 1417 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-cilium-run\") on node \"172.24.4.199\" DevicePath \"\""
Dec 13 03:54:29.479546 kubelet[1417]: I1213 03:54:29.477930 1417 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-hostproc\") on node \"172.24.4.199\" DevicePath \"\""
Dec 13 03:54:29.479546 kubelet[1417]: I1213 03:54:29.477951 1417 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-host-proc-sys-net\") on node \"172.24.4.199\" DevicePath \"\""
Dec 13 03:54:29.479546 kubelet[1417]: I1213 03:54:29.478019 1417 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-clustermesh-secrets\") on node \"172.24.4.199\" DevicePath \"\""
Dec 13 03:54:29.479546 kubelet[1417]: I1213 03:54:29.478045 1417 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-cilium-config-path\") on node \"172.24.4.199\" DevicePath \"\""
Dec 13 03:54:29.479546 kubelet[1417]: I1213 03:54:29.478068 1417 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-n2kg6\" (UniqueName: \"kubernetes.io/projected/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d-kube-api-access-n2kg6\") on node \"172.24.4.199\" DevicePath \"\""
Dec 13 03:54:29.643398 kubelet[1417]: I1213 03:54:29.643170 1417 scope.go:117] "RemoveContainer" containerID="f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7"
Dec 13 03:54:29.650039 env[1143]: time="2024-12-13T03:54:29.649757086Z" level=info msg="RemoveContainer for \"f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7\""
Dec 13 03:54:29.658043 env[1143]: time="2024-12-13T03:54:29.657534671Z" level=info msg="RemoveContainer for \"f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7\" returns successfully"
Dec 13 03:54:29.659884 kubelet[1417]: I1213 03:54:29.659843 1417 scope.go:117] "RemoveContainer" containerID="cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96"
Dec 13 03:54:29.660641 systemd[1]: Removed slice kubepods-burstable-podc4b8afc1_529b_46d6_bcd7_ad54eb092e8d.slice.
Dec 13 03:54:29.660889 systemd[1]: kubepods-burstable-podc4b8afc1_529b_46d6_bcd7_ad54eb092e8d.slice: Consumed 9.007s CPU time.
Dec 13 03:54:29.666823 env[1143]: time="2024-12-13T03:54:29.666204540Z" level=info msg="RemoveContainer for \"cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96\""
Dec 13 03:54:29.672157 env[1143]: time="2024-12-13T03:54:29.672088274Z" level=info msg="RemoveContainer for \"cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96\" returns successfully"
Dec 13 03:54:29.674254 kubelet[1417]: I1213 03:54:29.674210 1417 scope.go:117] "RemoveContainer" containerID="1f4099dc5cabcef8cb107513f2ea1d3683b33d615d35d3629a6e8864ae7bf3ce"
Dec 13 03:54:29.677265 env[1143]: time="2024-12-13T03:54:29.677142394Z" level=info msg="RemoveContainer for \"1f4099dc5cabcef8cb107513f2ea1d3683b33d615d35d3629a6e8864ae7bf3ce\""
Dec 13 03:54:29.682424 env[1143]: time="2024-12-13T03:54:29.682335567Z" level=info msg="RemoveContainer for \"1f4099dc5cabcef8cb107513f2ea1d3683b33d615d35d3629a6e8864ae7bf3ce\" returns successfully"
Dec 13 03:54:29.683040 kubelet[1417]: I1213 03:54:29.682922 1417 scope.go:117] "RemoveContainer" containerID="58c9519d80f60280d23e0e4c11f131f29e8c624e938caa00129ab07de6ffb4ca"
Dec 13 03:54:29.689714 env[1143]: time="2024-12-13T03:54:29.689069765Z" level=info msg="RemoveContainer for \"58c9519d80f60280d23e0e4c11f131f29e8c624e938caa00129ab07de6ffb4ca\""
Dec 13 03:54:29.695965 env[1143]: time="2024-12-13T03:54:29.695903131Z" level=info msg="RemoveContainer for \"58c9519d80f60280d23e0e4c11f131f29e8c624e938caa00129ab07de6ffb4ca\" returns successfully"
Dec 13 03:54:29.697036 kubelet[1417]: I1213 03:54:29.696899 1417 scope.go:117] "RemoveContainer" containerID="0e2c8a392f9598731f9eef9f799c892fb7993938b59184533700f3aef0d37836"
Dec 13 03:54:29.700919 env[1143]: time="2024-12-13T03:54:29.700799603Z" level=info msg="RemoveContainer for \"0e2c8a392f9598731f9eef9f799c892fb7993938b59184533700f3aef0d37836\""
Dec 13 03:54:29.707301 env[1143]: time="2024-12-13T03:54:29.707199371Z" level=info msg="RemoveContainer for \"0e2c8a392f9598731f9eef9f799c892fb7993938b59184533700f3aef0d37836\" returns successfully"
Dec 13 03:54:29.707733 kubelet[1417]: I1213 03:54:29.707692 1417 scope.go:117] "RemoveContainer" containerID="f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7"
Dec 13 03:54:29.708608 env[1143]: time="2024-12-13T03:54:29.708377492Z" level=error msg="ContainerStatus for \"f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7\": not found"
Dec 13 03:54:29.709206 kubelet[1417]: E1213 03:54:29.709087 1417 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7\": not found" containerID="f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7"
Dec 13 03:54:29.709593 kubelet[1417]: I1213 03:54:29.709257 1417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7"} err="failed to get container status \"f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7\": rpc error: code = NotFound desc = an error occurred when try to find container \"f5849cfacb4eb07a7652df5a1024d47d8d1a0c74e34fb4cd909c1b5c7117cea7\": not found"
Dec 13 03:54:29.709716 kubelet[1417]: I1213 03:54:29.709631 1417 scope.go:117] "RemoveContainer" containerID="cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96"
Dec 13 03:54:29.710518 env[1143]: time="2024-12-13T03:54:29.710288827Z" level=error msg="ContainerStatus for \"cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96\": not found"
Dec 13 03:54:29.710753 kubelet[1417]: E1213 03:54:29.710694 1417 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96\": not found" containerID="cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96"
Dec 13 03:54:29.710881 kubelet[1417]: I1213 03:54:29.710798 1417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96"} err="failed to get container status \"cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96\": rpc error: code = NotFound desc = an error occurred when try to find container \"cc29cad59e47ecc8d098ede7dc36228170cac2839a5d145fb1c887e0418f1a96\": not found"
Dec 13 03:54:29.711018 kubelet[1417]: I1213 03:54:29.710880 1417 scope.go:117] "RemoveContainer" containerID="1f4099dc5cabcef8cb107513f2ea1d3683b33d615d35d3629a6e8864ae7bf3ce"
Dec 13 03:54:29.711957 env[1143]: time="2024-12-13T03:54:29.711765120Z" level=error msg="ContainerStatus for \"1f4099dc5cabcef8cb107513f2ea1d3683b33d615d35d3629a6e8864ae7bf3ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f4099dc5cabcef8cb107513f2ea1d3683b33d615d35d3629a6e8864ae7bf3ce\": not found"
Dec 13 03:54:29.712508 kubelet[1417]: E1213 03:54:29.712378 1417 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f4099dc5cabcef8cb107513f2ea1d3683b33d615d35d3629a6e8864ae7bf3ce\": not found" containerID="1f4099dc5cabcef8cb107513f2ea1d3683b33d615d35d3629a6e8864ae7bf3ce"
Dec 13 03:54:29.712636 kubelet[1417]: I1213 03:54:29.712507 1417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f4099dc5cabcef8cb107513f2ea1d3683b33d615d35d3629a6e8864ae7bf3ce"} err="failed to get container status \"1f4099dc5cabcef8cb107513f2ea1d3683b33d615d35d3629a6e8864ae7bf3ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f4099dc5cabcef8cb107513f2ea1d3683b33d615d35d3629a6e8864ae7bf3ce\": not found"
Dec 13 03:54:29.712636 kubelet[1417]: I1213 03:54:29.712595 1417 scope.go:117] "RemoveContainer" containerID="58c9519d80f60280d23e0e4c11f131f29e8c624e938caa00129ab07de6ffb4ca"
Dec 13 03:54:29.713584 env[1143]: time="2024-12-13T03:54:29.713354888Z" level=error msg="ContainerStatus for \"58c9519d80f60280d23e0e4c11f131f29e8c624e938caa00129ab07de6ffb4ca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58c9519d80f60280d23e0e4c11f131f29e8c624e938caa00129ab07de6ffb4ca\": not found"
Dec 13 03:54:29.713923 kubelet[1417]: E1213 03:54:29.713874 1417 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58c9519d80f60280d23e0e4c11f131f29e8c624e938caa00129ab07de6ffb4ca\": not found" containerID="58c9519d80f60280d23e0e4c11f131f29e8c624e938caa00129ab07de6ffb4ca"
Dec 13 03:54:29.714213 kubelet[1417]: I1213 03:54:29.714149 1417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58c9519d80f60280d23e0e4c11f131f29e8c624e938caa00129ab07de6ffb4ca"} err="failed to get container status \"58c9519d80f60280d23e0e4c11f131f29e8c624e938caa00129ab07de6ffb4ca\": rpc error: code = NotFound desc = an error occurred when try to find container \"58c9519d80f60280d23e0e4c11f131f29e8c624e938caa00129ab07de6ffb4ca\": not found"
Dec 13 03:54:29.714383 kubelet[1417]: I1213 03:54:29.714356 1417 scope.go:117] "RemoveContainer" containerID="0e2c8a392f9598731f9eef9f799c892fb7993938b59184533700f3aef0d37836"
Dec 13 03:54:29.715806 env[1143]: time="2024-12-13T03:54:29.715589171Z" level=error msg="ContainerStatus for \"0e2c8a392f9598731f9eef9f799c892fb7993938b59184533700f3aef0d37836\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e2c8a392f9598731f9eef9f799c892fb7993938b59184533700f3aef0d37836\": not found"
Dec 13 03:54:29.716769 kubelet[1417]: E1213 03:54:29.716722 1417 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e2c8a392f9598731f9eef9f799c892fb7993938b59184533700f3aef0d37836\": not found" containerID="0e2c8a392f9598731f9eef9f799c892fb7993938b59184533700f3aef0d37836"
Dec 13 03:54:29.717162 kubelet[1417]: I1213 03:54:29.717048 1417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e2c8a392f9598731f9eef9f799c892fb7993938b59184533700f3aef0d37836"} err="failed to get container status \"0e2c8a392f9598731f9eef9f799c892fb7993938b59184533700f3aef0d37836\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e2c8a392f9598731f9eef9f799c892fb7993938b59184533700f3aef0d37836\": not found"
Dec 13 03:54:29.855528 kubelet[1417]: E1213 03:54:29.855421 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:54:29.958459 systemd[1]: var-lib-kubelet-pods-c4b8afc1\x2d529b\x2d46d6\x2dbcd7\x2dad54eb092e8d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn2kg6.mount: Deactivated successfully.
Dec 13 03:54:29.958681 systemd[1]: var-lib-kubelet-pods-c4b8afc1\x2d529b\x2d46d6\x2dbcd7\x2dad54eb092e8d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 03:54:29.958817 systemd[1]: var-lib-kubelet-pods-c4b8afc1\x2d529b\x2d46d6\x2dbcd7\x2dad54eb092e8d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 03:54:30.856808 kubelet[1417]: E1213 03:54:30.856533 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:54:31.077312 kubelet[1417]: I1213 03:54:31.077251 1417 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" path="/var/lib/kubelet/pods/c4b8afc1-529b-46d6-bcd7-ad54eb092e8d/volumes"
Dec 13 03:54:31.857613 kubelet[1417]: E1213 03:54:31.857552 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:54:32.030882 kubelet[1417]: E1213 03:54:32.030773 1417 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 03:54:32.858907 kubelet[1417]: E1213 03:54:32.858818 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:54:33.859615 kubelet[1417]: E1213 03:54:33.859544 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:54:34.725845 kubelet[1417]: I1213 03:54:34.725695 1417 topology_manager.go:215] "Topology Admit Handler" podUID="9f28f501-433b-40df-98bb-4759c7b323ed" podNamespace="kube-system" podName="cilium-operator-599987898-b4kvs"
Dec 13 03:54:34.726222 kubelet[1417]: E1213 03:54:34.725844 1417 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" containerName="mount-bpf-fs"
Dec 13 03:54:34.726222 kubelet[1417]: E1213 03:54:34.725908 1417 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" containerName="clean-cilium-state"
Dec 13 03:54:34.726222 kubelet[1417]: E1213 03:54:34.725926 1417 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" containerName="mount-cgroup"
Dec 13 03:54:34.726222 kubelet[1417]: E1213 03:54:34.725941 1417 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" containerName="apply-sysctl-overwrites"
Dec 13 03:54:34.726222 kubelet[1417]: E1213 03:54:34.725956 1417 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" containerName="cilium-agent"
Dec 13 03:54:34.726222 kubelet[1417]: I1213 03:54:34.726045 1417 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b8afc1-529b-46d6-bcd7-ad54eb092e8d" containerName="cilium-agent"
Dec 13 03:54:34.739176 systemd[1]: Created slice kubepods-besteffort-pod9f28f501_433b_40df_98bb_4759c7b323ed.slice.
Dec 13 03:54:34.740179 kubelet[1417]: I1213 03:54:34.740049 1417 topology_manager.go:215] "Topology Admit Handler" podUID="8275add1-72e9-459f-81e0-2aa41f024067" podNamespace="kube-system" podName="cilium-6pz44"
Dec 13 03:54:34.756317 systemd[1]: Created slice kubepods-burstable-pod8275add1_72e9_459f_81e0_2aa41f024067.slice.
Dec 13 03:54:34.860395 kubelet[1417]: E1213 03:54:34.860295 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:54:34.912495 kubelet[1417]: I1213 03:54:34.912449 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-hostproc\") pod \"cilium-6pz44\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " pod="kube-system/cilium-6pz44"
Dec 13 03:54:34.912892 kubelet[1417]: I1213 03:54:34.912856 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-bpf-maps\") pod \"cilium-6pz44\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " pod="kube-system/cilium-6pz44"
Dec 13 03:54:34.913207 kubelet[1417]: I1213 03:54:34.913163 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-etc-cni-netd\") pod \"cilium-6pz44\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " pod="kube-system/cilium-6pz44"
Dec 13 03:54:34.913483 kubelet[1417]: I1213 03:54:34.913449 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-lib-modules\") pod \"cilium-6pz44\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " pod="kube-system/cilium-6pz44"
Dec 13 03:54:34.913711 kubelet[1417]: I1213 03:54:34.913677 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8275add1-72e9-459f-81e0-2aa41f024067-hubble-tls\") pod \"cilium-6pz44\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " pod="kube-system/cilium-6pz44"
Dec 13 03:54:34.913934 kubelet[1417]: I1213 03:54:34.913894 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm62f\" (UniqueName: \"kubernetes.io/projected/8275add1-72e9-459f-81e0-2aa41f024067-kube-api-access-cm62f\") pod \"cilium-6pz44\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " pod="kube-system/cilium-6pz44"
Dec 13 03:54:34.914200 kubelet[1417]: I1213 03:54:34.914165 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-cilium-run\") pod \"cilium-6pz44\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " pod="kube-system/cilium-6pz44"
Dec 13 03:54:34.914415 kubelet[1417]: I1213 03:54:34.914381 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-host-proc-sys-net\") pod \"cilium-6pz44\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " pod="kube-system/cilium-6pz44"
Dec 13 03:54:34.914717 kubelet[1417]: I1213 03:54:34.914680 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f28f501-433b-40df-98bb-4759c7b323ed-cilium-config-path\") pod \"cilium-operator-599987898-b4kvs\" (UID: \"9f28f501-433b-40df-98bb-4759c7b323ed\") " pod="kube-system/cilium-operator-599987898-b4kvs"
Dec 13 03:54:34.914935 kubelet[1417]: I1213 03:54:34.914895 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brbp7\" (UniqueName: \"kubernetes.io/projected/9f28f501-433b-40df-98bb-4759c7b323ed-kube-api-access-brbp7\") pod \"cilium-operator-599987898-b4kvs\" (UID: \"9f28f501-433b-40df-98bb-4759c7b323ed\") " pod="kube-system/cilium-operator-599987898-b4kvs"
Dec 13 03:54:34.915203 kubelet[1417]: I1213 03:54:34.915168 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-cilium-cgroup\") pod \"cilium-6pz44\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " pod="kube-system/cilium-6pz44"
Dec 13 03:54:34.915404 kubelet[1417]: I1213 03:54:34.915372 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-cni-path\") pod \"cilium-6pz44\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " pod="kube-system/cilium-6pz44"
Dec 13 03:54:34.915610 kubelet[1417]: I1213 03:54:34.915577 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-xtables-lock\") pod \"cilium-6pz44\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " pod="kube-system/cilium-6pz44"
Dec 13 03:54:34.915873 kubelet[1417]: I1213 03:54:34.915833 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8275add1-72e9-459f-81e0-2aa41f024067-clustermesh-secrets\") pod \"cilium-6pz44\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " pod="kube-system/cilium-6pz44"
Dec 13 03:54:34.916135 kubelet[1417]: I1213 03:54:34.916095 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8275add1-72e9-459f-81e0-2aa41f024067-cilium-config-path\") pod \"cilium-6pz44\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " pod="kube-system/cilium-6pz44"
Dec 13 03:54:34.916359 kubelet[1417]: I1213 03:54:34.916324 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8275add1-72e9-459f-81e0-2aa41f024067-cilium-ipsec-secrets\") pod \"cilium-6pz44\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " pod="kube-system/cilium-6pz44"
Dec 13 03:54:34.916587 kubelet[1417]: I1213 03:54:34.916552 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-host-proc-sys-kernel\") pod \"cilium-6pz44\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " pod="kube-system/cilium-6pz44"
Dec 13 03:54:35.351707 env[1143]: time="2024-12-13T03:54:35.351598952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-b4kvs,Uid:9f28f501-433b-40df-98bb-4759c7b323ed,Namespace:kube-system,Attempt:0,}"
Dec 13 03:54:35.371320 env[1143]: time="2024-12-13T03:54:35.371230857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6pz44,Uid:8275add1-72e9-459f-81e0-2aa41f024067,Namespace:kube-system,Attempt:0,}"
Dec 13 03:54:35.386231 env[1143]: time="2024-12-13T03:54:35.385958222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:54:35.386543 env[1143]: time="2024-12-13T03:54:35.386159020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:54:35.386543 env[1143]: time="2024-12-13T03:54:35.386241675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:54:35.386958 env[1143]: time="2024-12-13T03:54:35.386780862Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ecb8907b93a5b573049dfa7bfa38f7bf0b45da9640b7b455b610b97a5875fbb pid=2977 runtime=io.containerd.runc.v2
Dec 13 03:54:35.415502 systemd[1]: Started cri-containerd-8ecb8907b93a5b573049dfa7bfa38f7bf0b45da9640b7b455b610b97a5875fbb.scope.
Dec 13 03:54:35.435473 env[1143]: time="2024-12-13T03:54:35.427603408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:54:35.435473 env[1143]: time="2024-12-13T03:54:35.427690141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:54:35.435473 env[1143]: time="2024-12-13T03:54:35.427718384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:54:35.435473 env[1143]: time="2024-12-13T03:54:35.429657488Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/81e91a1d1a83b2dc382f2478bd0c536c9a0061c6c6c5ce46e4d8734b7816666e pid=3001 runtime=io.containerd.runc.v2
Dec 13 03:54:35.463294 systemd[1]: Started cri-containerd-81e91a1d1a83b2dc382f2478bd0c536c9a0061c6c6c5ce46e4d8734b7816666e.scope.
Dec 13 03:54:35.505096 env[1143]: time="2024-12-13T03:54:35.505014384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6pz44,Uid:8275add1-72e9-459f-81e0-2aa41f024067,Namespace:kube-system,Attempt:0,} returns sandbox id \"81e91a1d1a83b2dc382f2478bd0c536c9a0061c6c6c5ce46e4d8734b7816666e\"" Dec 13 03:54:35.509823 env[1143]: time="2024-12-13T03:54:35.509703988Z" level=info msg="CreateContainer within sandbox \"81e91a1d1a83b2dc382f2478bd0c536c9a0061c6c6c5ce46e4d8734b7816666e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 03:54:35.546775 env[1143]: time="2024-12-13T03:54:35.546691379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-b4kvs,Uid:9f28f501-433b-40df-98bb-4759c7b323ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ecb8907b93a5b573049dfa7bfa38f7bf0b45da9640b7b455b610b97a5875fbb\"" Dec 13 03:54:35.548710 env[1143]: time="2024-12-13T03:54:35.548647525Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 03:54:35.554356 env[1143]: time="2024-12-13T03:54:35.554308870Z" level=info msg="CreateContainer within sandbox \"81e91a1d1a83b2dc382f2478bd0c536c9a0061c6c6c5ce46e4d8734b7816666e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5dba487ab00f514b2a04157ae3cc55fe37c5c15863e4c8aad78d514bb665ff91\"" Dec 13 03:54:35.554859 env[1143]: time="2024-12-13T03:54:35.554836604Z" level=info msg="StartContainer for \"5dba487ab00f514b2a04157ae3cc55fe37c5c15863e4c8aad78d514bb665ff91\"" Dec 13 03:54:35.572895 systemd[1]: Started cri-containerd-5dba487ab00f514b2a04157ae3cc55fe37c5c15863e4c8aad78d514bb665ff91.scope. Dec 13 03:54:35.588643 systemd[1]: cri-containerd-5dba487ab00f514b2a04157ae3cc55fe37c5c15863e4c8aad78d514bb665ff91.scope: Deactivated successfully. 
Dec 13 03:54:35.614089 env[1143]: time="2024-12-13T03:54:35.612840950Z" level=info msg="shim disconnected" id=5dba487ab00f514b2a04157ae3cc55fe37c5c15863e4c8aad78d514bb665ff91 Dec 13 03:54:35.614089 env[1143]: time="2024-12-13T03:54:35.612903156Z" level=warning msg="cleaning up after shim disconnected" id=5dba487ab00f514b2a04157ae3cc55fe37c5c15863e4c8aad78d514bb665ff91 namespace=k8s.io Dec 13 03:54:35.614089 env[1143]: time="2024-12-13T03:54:35.612915439Z" level=info msg="cleaning up dead shim" Dec 13 03:54:35.621870 env[1143]: time="2024-12-13T03:54:35.621800869Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:54:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3081 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T03:54:35Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5dba487ab00f514b2a04157ae3cc55fe37c5c15863e4c8aad78d514bb665ff91/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 03:54:35.622280 env[1143]: time="2024-12-13T03:54:35.622158833Z" level=error msg="copy shim log" error="read /proc/self/fd/66: file already closed" Dec 13 03:54:35.625389 env[1143]: time="2024-12-13T03:54:35.625232384Z" level=error msg="Failed to pipe stderr of container \"5dba487ab00f514b2a04157ae3cc55fe37c5c15863e4c8aad78d514bb665ff91\"" error="reading from a closed fifo" Dec 13 03:54:35.625389 env[1143]: time="2024-12-13T03:54:35.625240910Z" level=error msg="Failed to pipe stdout of container \"5dba487ab00f514b2a04157ae3cc55fe37c5c15863e4c8aad78d514bb665ff91\"" error="reading from a closed fifo" Dec 13 03:54:35.633554 env[1143]: time="2024-12-13T03:54:35.633493517Z" level=error msg="StartContainer for \"5dba487ab00f514b2a04157ae3cc55fe37c5c15863e4c8aad78d514bb665ff91\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 03:54:35.633950 kubelet[1417]: E1213 03:54:35.633884 1417 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5dba487ab00f514b2a04157ae3cc55fe37c5c15863e4c8aad78d514bb665ff91" Dec 13 03:54:35.634173 kubelet[1417]: E1213 03:54:35.634150 1417 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 03:54:35.634173 kubelet[1417]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 03:54:35.634173 kubelet[1417]: rm /hostbin/cilium-mount Dec 13 03:54:35.634274 kubelet[1417]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cm62f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-6pz44_kube-system(8275add1-72e9-459f-81e0-2aa41f024067): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 03:54:35.634274 kubelet[1417]: E1213 03:54:35.634192 1417 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-6pz44" podUID="8275add1-72e9-459f-81e0-2aa41f024067" Dec 13 03:54:35.673918 env[1143]: time="2024-12-13T03:54:35.673825760Z" level=info msg="CreateContainer within sandbox \"81e91a1d1a83b2dc382f2478bd0c536c9a0061c6c6c5ce46e4d8734b7816666e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Dec 13 03:54:35.692196 env[1143]: time="2024-12-13T03:54:35.692065752Z" level=info msg="CreateContainer within sandbox \"81e91a1d1a83b2dc382f2478bd0c536c9a0061c6c6c5ce46e4d8734b7816666e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"701335312af150b09f9025206167003b51ecc170b1034637d4dd4b065c86a75b\"" Dec 13 03:54:35.694724 env[1143]: time="2024-12-13T03:54:35.694672814Z" level=info msg="StartContainer for \"701335312af150b09f9025206167003b51ecc170b1034637d4dd4b065c86a75b\"" Dec 13 03:54:35.723935 systemd[1]: Started cri-containerd-701335312af150b09f9025206167003b51ecc170b1034637d4dd4b065c86a75b.scope. Dec 13 03:54:35.753747 systemd[1]: cri-containerd-701335312af150b09f9025206167003b51ecc170b1034637d4dd4b065c86a75b.scope: Deactivated successfully. 
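[Annotation] The init-container spec kubelet dumps above boils down to a three-step script that mounts the cgroup2 filesystem on the host from inside the pod. Rendered as a commented sketch, with the env values the spec sets inlined; the `/hostbin` and `/hostproc` paths exist only inside a pod with those volume mounts, so this is illustrative, not something to run on a workstation:

```shell
# What the mount-cgroup init container executes, per the spec dumped in the
# log above. CGROUP_ROOT and BIN_PATH come from the container's Env entries.
CGROUP_ROOT=/run/cilium/cgroupv2
BIN_PATH=/opt/cni/bin

# 1. Copy the helper binary onto the host via the /hostbin mount (cni-path volume).
cp /usr/bin/cilium-mount /hostbin/cilium-mount
# 2. Re-enter the host's cgroup and mount namespaces through /hostproc (pid 1)
#    so the mount performed by the helper lands on the host, not in the container.
nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt \
    "${BIN_PATH}/cilium-mount" "$CGROUP_ROOT"
# 3. Remove the copied binary from the host again.
rm /hostbin/cilium-mount
```

Step 2 is the point of the exercise: without `nsenter` the mount would be confined to the container's own mount namespace.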
Dec 13 03:54:35.767661 env[1143]: time="2024-12-13T03:54:35.767507017Z" level=info msg="shim disconnected" id=701335312af150b09f9025206167003b51ecc170b1034637d4dd4b065c86a75b Dec 13 03:54:35.767943 env[1143]: time="2024-12-13T03:54:35.767706513Z" level=warning msg="cleaning up after shim disconnected" id=701335312af150b09f9025206167003b51ecc170b1034637d4dd4b065c86a75b namespace=k8s.io Dec 13 03:54:35.767943 env[1143]: time="2024-12-13T03:54:35.767738083Z" level=info msg="cleaning up dead shim" Dec 13 03:54:35.777298 env[1143]: time="2024-12-13T03:54:35.777210778Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:54:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3119 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T03:54:35Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/701335312af150b09f9025206167003b51ecc170b1034637d4dd4b065c86a75b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 03:54:35.777846 env[1143]: time="2024-12-13T03:54:35.777748882Z" level=error msg="copy shim log" error="read /proc/self/fd/66: file already closed" Dec 13 03:54:35.781098 env[1143]: time="2024-12-13T03:54:35.781038820Z" level=error msg="Failed to pipe stdout of container \"701335312af150b09f9025206167003b51ecc170b1034637d4dd4b065c86a75b\"" error="reading from a closed fifo" Dec 13 03:54:35.781259 env[1143]: time="2024-12-13T03:54:35.781227014Z" level=error msg="Failed to pipe stderr of container \"701335312af150b09f9025206167003b51ecc170b1034637d4dd4b065c86a75b\"" error="reading from a closed fifo" Dec 13 03:54:35.785750 env[1143]: time="2024-12-13T03:54:35.785704219Z" level=error msg="StartContainer for \"701335312af150b09f9025206167003b51ecc170b1034637d4dd4b065c86a75b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 03:54:35.786459 kubelet[1417]: E1213 03:54:35.786124 1417 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="701335312af150b09f9025206167003b51ecc170b1034637d4dd4b065c86a75b" Dec 13 03:54:35.786459 kubelet[1417]: E1213 03:54:35.786353 1417 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 03:54:35.786459 kubelet[1417]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 03:54:35.786459 kubelet[1417]: rm /hostbin/cilium-mount Dec 13 03:54:35.786459 kubelet[1417]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cm62f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-6pz44_kube-system(8275add1-72e9-459f-81e0-2aa41f024067): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 03:54:35.786459 kubelet[1417]: E1213 03:54:35.786411 1417 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-6pz44" podUID="8275add1-72e9-459f-81e0-2aa41f024067" Dec 13 03:54:35.861288 kubelet[1417]: E1213 03:54:35.861171 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:36.676095 kubelet[1417]: I1213 03:54:36.675942 1417 scope.go:117] "RemoveContainer" containerID="5dba487ab00f514b2a04157ae3cc55fe37c5c15863e4c8aad78d514bb665ff91" Dec 13 03:54:36.686210 env[1143]: time="2024-12-13T03:54:36.677267804Z" level=info msg="StopPodSandbox for \"81e91a1d1a83b2dc382f2478bd0c536c9a0061c6c6c5ce46e4d8734b7816666e\"" Dec 13 03:54:36.686210 env[1143]: time="2024-12-13T03:54:36.677405464Z" level=info msg="Container to stop \"5dba487ab00f514b2a04157ae3cc55fe37c5c15863e4c8aad78d514bb665ff91\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 03:54:36.686210 env[1143]: time="2024-12-13T03:54:36.677445800Z" level=info msg="Container to stop \"701335312af150b09f9025206167003b51ecc170b1034637d4dd4b065c86a75b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 03:54:36.683679 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-81e91a1d1a83b2dc382f2478bd0c536c9a0061c6c6c5ce46e4d8734b7816666e-shm.mount: Deactivated successfully. 
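[Annotation] Both start attempts above die with the same OCI runtime error, `write /proc/self/attr/keycreate: invalid argument`: during container init, runc writes the SELinux label the spec requests (`SELinuxOptions{Type:spc_t}`) to `/proc/self/attr/keycreate`, and the host kernel rejects the write, which typically indicates the node's SELinux setup does not accept that label. A minimal shell sketch for counting such failures in a journal excerpt; the sample file and its truncated lines are illustrative, not taken from a live host, where the input would come from `journalctl -u kubelet` instead:

```shell
# Build a small sample mimicking the two failed StartContainer entries in
# this log, then count how many start attempts hit the keycreate failure.
cat > /tmp/kubelet-excerpt.log <<'EOF'
E1213 03:54:35.633884 remote_runtime.go:343] "StartContainer from runtime service failed" err="... error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5dba487ab00f..."
E1213 03:54:35.786124 remote_runtime.go:343] "StartContainer from runtime service failed" err="... error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="701335312af1..."
EOF

count=$(grep -c 'keycreate: invalid argument' /tmp/kubelet-excerpt.log)
echo "failed start attempts: $count"
```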
Dec 13 03:54:36.692278 env[1143]: time="2024-12-13T03:54:36.692175145Z" level=info msg="RemoveContainer for \"5dba487ab00f514b2a04157ae3cc55fe37c5c15863e4c8aad78d514bb665ff91\"" Dec 13 03:54:36.699276 env[1143]: time="2024-12-13T03:54:36.699138772Z" level=info msg="RemoveContainer for \"5dba487ab00f514b2a04157ae3cc55fe37c5c15863e4c8aad78d514bb665ff91\" returns successfully" Dec 13 03:54:36.710509 systemd[1]: cri-containerd-81e91a1d1a83b2dc382f2478bd0c536c9a0061c6c6c5ce46e4d8734b7816666e.scope: Deactivated successfully. Dec 13 03:54:36.770265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81e91a1d1a83b2dc382f2478bd0c536c9a0061c6c6c5ce46e4d8734b7816666e-rootfs.mount: Deactivated successfully. Dec 13 03:54:36.776307 env[1143]: time="2024-12-13T03:54:36.776183398Z" level=info msg="shim disconnected" id=81e91a1d1a83b2dc382f2478bd0c536c9a0061c6c6c5ce46e4d8734b7816666e Dec 13 03:54:36.777250 env[1143]: time="2024-12-13T03:54:36.777157784Z" level=warning msg="cleaning up after shim disconnected" id=81e91a1d1a83b2dc382f2478bd0c536c9a0061c6c6c5ce46e4d8734b7816666e namespace=k8s.io Dec 13 03:54:36.777250 env[1143]: time="2024-12-13T03:54:36.777177871Z" level=info msg="cleaning up dead shim" Dec 13 03:54:36.790508 env[1143]: time="2024-12-13T03:54:36.790424643Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:54:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3151 runtime=io.containerd.runc.v2\n" Dec 13 03:54:36.790843 env[1143]: time="2024-12-13T03:54:36.790804379Z" level=info msg="TearDown network for sandbox \"81e91a1d1a83b2dc382f2478bd0c536c9a0061c6c6c5ce46e4d8734b7816666e\" successfully" Dec 13 03:54:36.790843 env[1143]: time="2024-12-13T03:54:36.790838493Z" level=info msg="StopPodSandbox for \"81e91a1d1a83b2dc382f2478bd0c536c9a0061c6c6c5ce46e4d8734b7816666e\" returns successfully" Dec 13 03:54:36.836998 kubelet[1417]: I1213 03:54:36.836944 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-lib-modules\") pod \"8275add1-72e9-459f-81e0-2aa41f024067\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " Dec 13 03:54:36.837247 kubelet[1417]: I1213 03:54:36.837004 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-host-proc-sys-net\") pod \"8275add1-72e9-459f-81e0-2aa41f024067\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " Dec 13 03:54:36.837247 kubelet[1417]: I1213 03:54:36.837040 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8275add1-72e9-459f-81e0-2aa41f024067-cilium-ipsec-secrets\") pod \"8275add1-72e9-459f-81e0-2aa41f024067\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " Dec 13 03:54:36.837247 kubelet[1417]: I1213 03:54:36.837065 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8275add1-72e9-459f-81e0-2aa41f024067-cilium-config-path\") pod \"8275add1-72e9-459f-81e0-2aa41f024067\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " Dec 13 03:54:36.837247 kubelet[1417]: I1213 03:54:36.837089 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-hostproc\") pod \"8275add1-72e9-459f-81e0-2aa41f024067\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " Dec 13 03:54:36.837247 kubelet[1417]: I1213 03:54:36.837112 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-cni-path\") pod \"8275add1-72e9-459f-81e0-2aa41f024067\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " Dec 13 03:54:36.837247 kubelet[1417]: I1213 
03:54:36.837136 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-xtables-lock\") pod \"8275add1-72e9-459f-81e0-2aa41f024067\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " Dec 13 03:54:36.837247 kubelet[1417]: I1213 03:54:36.837161 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-host-proc-sys-kernel\") pod \"8275add1-72e9-459f-81e0-2aa41f024067\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " Dec 13 03:54:36.837247 kubelet[1417]: I1213 03:54:36.837188 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-etc-cni-netd\") pod \"8275add1-72e9-459f-81e0-2aa41f024067\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " Dec 13 03:54:36.837247 kubelet[1417]: I1213 03:54:36.837221 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8275add1-72e9-459f-81e0-2aa41f024067-hubble-tls\") pod \"8275add1-72e9-459f-81e0-2aa41f024067\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " Dec 13 03:54:36.837247 kubelet[1417]: I1213 03:54:36.837244 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-cilium-cgroup\") pod \"8275add1-72e9-459f-81e0-2aa41f024067\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " Dec 13 03:54:36.837730 kubelet[1417]: I1213 03:54:36.837273 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8275add1-72e9-459f-81e0-2aa41f024067-clustermesh-secrets\") pod 
\"8275add1-72e9-459f-81e0-2aa41f024067\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " Dec 13 03:54:36.837730 kubelet[1417]: I1213 03:54:36.837298 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-bpf-maps\") pod \"8275add1-72e9-459f-81e0-2aa41f024067\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " Dec 13 03:54:36.837730 kubelet[1417]: I1213 03:54:36.837327 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm62f\" (UniqueName: \"kubernetes.io/projected/8275add1-72e9-459f-81e0-2aa41f024067-kube-api-access-cm62f\") pod \"8275add1-72e9-459f-81e0-2aa41f024067\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " Dec 13 03:54:36.837730 kubelet[1417]: I1213 03:54:36.837354 1417 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-cilium-run\") pod \"8275add1-72e9-459f-81e0-2aa41f024067\" (UID: \"8275add1-72e9-459f-81e0-2aa41f024067\") " Dec 13 03:54:36.837730 kubelet[1417]: I1213 03:54:36.837422 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8275add1-72e9-459f-81e0-2aa41f024067" (UID: "8275add1-72e9-459f-81e0-2aa41f024067"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:36.837730 kubelet[1417]: I1213 03:54:36.837463 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8275add1-72e9-459f-81e0-2aa41f024067" (UID: "8275add1-72e9-459f-81e0-2aa41f024067"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:36.837730 kubelet[1417]: I1213 03:54:36.837486 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8275add1-72e9-459f-81e0-2aa41f024067" (UID: "8275add1-72e9-459f-81e0-2aa41f024067"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:36.838092 kubelet[1417]: I1213 03:54:36.837894 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8275add1-72e9-459f-81e0-2aa41f024067" (UID: "8275add1-72e9-459f-81e0-2aa41f024067"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:36.841097 kubelet[1417]: I1213 03:54:36.841057 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-hostproc" (OuterVolumeSpecName: "hostproc") pod "8275add1-72e9-459f-81e0-2aa41f024067" (UID: "8275add1-72e9-459f-81e0-2aa41f024067"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:36.841312 kubelet[1417]: I1213 03:54:36.841281 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-cni-path" (OuterVolumeSpecName: "cni-path") pod "8275add1-72e9-459f-81e0-2aa41f024067" (UID: "8275add1-72e9-459f-81e0-2aa41f024067"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:36.841435 kubelet[1417]: I1213 03:54:36.841414 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8275add1-72e9-459f-81e0-2aa41f024067" (UID: "8275add1-72e9-459f-81e0-2aa41f024067"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:36.841565 kubelet[1417]: I1213 03:54:36.841544 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8275add1-72e9-459f-81e0-2aa41f024067" (UID: "8275add1-72e9-459f-81e0-2aa41f024067"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:36.843733 kubelet[1417]: I1213 03:54:36.843707 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8275add1-72e9-459f-81e0-2aa41f024067-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8275add1-72e9-459f-81e0-2aa41f024067" (UID: "8275add1-72e9-459f-81e0-2aa41f024067"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:54:36.849313 systemd[1]: var-lib-kubelet-pods-8275add1\x2d72e9\x2d459f\x2d81e0\x2d2aa41f024067-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 03:54:36.850897 kubelet[1417]: I1213 03:54:36.850851 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8275add1-72e9-459f-81e0-2aa41f024067" (UID: "8275add1-72e9-459f-81e0-2aa41f024067"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:36.851003 kubelet[1417]: I1213 03:54:36.850941 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8275add1-72e9-459f-81e0-2aa41f024067" (UID: "8275add1-72e9-459f-81e0-2aa41f024067"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:54:36.851860 kubelet[1417]: I1213 03:54:36.851828 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8275add1-72e9-459f-81e0-2aa41f024067-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8275add1-72e9-459f-81e0-2aa41f024067" (UID: "8275add1-72e9-459f-81e0-2aa41f024067"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:54:36.854425 systemd[1]: var-lib-kubelet-pods-8275add1\x2d72e9\x2d459f\x2d81e0\x2d2aa41f024067-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 03:54:36.858217 kubelet[1417]: I1213 03:54:36.857862 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8275add1-72e9-459f-81e0-2aa41f024067-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "8275add1-72e9-459f-81e0-2aa41f024067" (UID: "8275add1-72e9-459f-81e0-2aa41f024067"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:54:36.861383 kubelet[1417]: E1213 03:54:36.861331 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:36.863252 kubelet[1417]: I1213 03:54:36.863220 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8275add1-72e9-459f-81e0-2aa41f024067-kube-api-access-cm62f" (OuterVolumeSpecName: "kube-api-access-cm62f") pod "8275add1-72e9-459f-81e0-2aa41f024067" (UID: "8275add1-72e9-459f-81e0-2aa41f024067"). InnerVolumeSpecName "kube-api-access-cm62f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:54:36.863493 kubelet[1417]: I1213 03:54:36.863472 1417 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8275add1-72e9-459f-81e0-2aa41f024067-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8275add1-72e9-459f-81e0-2aa41f024067" (UID: "8275add1-72e9-459f-81e0-2aa41f024067"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:54:36.943691 kubelet[1417]: I1213 03:54:36.937617 1417 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-host-proc-sys-kernel\") on node \"172.24.4.199\" DevicePath \"\"" Dec 13 03:54:36.943691 kubelet[1417]: I1213 03:54:36.937683 1417 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-etc-cni-netd\") on node \"172.24.4.199\" DevicePath \"\"" Dec 13 03:54:36.943691 kubelet[1417]: I1213 03:54:36.937707 1417 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8275add1-72e9-459f-81e0-2aa41f024067-hubble-tls\") on node \"172.24.4.199\" DevicePath \"\"" Dec 13 03:54:36.943691 kubelet[1417]: I1213 03:54:36.937730 1417 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-cilium-cgroup\") on node \"172.24.4.199\" DevicePath \"\"" Dec 13 03:54:36.943691 kubelet[1417]: I1213 03:54:36.937757 1417 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-cni-path\") on node \"172.24.4.199\" DevicePath \"\"" Dec 13 03:54:36.943691 kubelet[1417]: I1213 03:54:36.937780 1417 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-xtables-lock\") on node \"172.24.4.199\" DevicePath \"\"" Dec 13 03:54:36.943691 kubelet[1417]: I1213 03:54:36.937802 1417 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-bpf-maps\") on node \"172.24.4.199\" DevicePath \"\"" Dec 13 03:54:36.943691 kubelet[1417]: I1213 
03:54:36.937823 1417 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cm62f\" (UniqueName: \"kubernetes.io/projected/8275add1-72e9-459f-81e0-2aa41f024067-kube-api-access-cm62f\") on node \"172.24.4.199\" DevicePath \"\"" Dec 13 03:54:36.943691 kubelet[1417]: I1213 03:54:36.937844 1417 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-cilium-run\") on node \"172.24.4.199\" DevicePath \"\"" Dec 13 03:54:36.943691 kubelet[1417]: I1213 03:54:36.937864 1417 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8275add1-72e9-459f-81e0-2aa41f024067-clustermesh-secrets\") on node \"172.24.4.199\" DevicePath \"\"" Dec 13 03:54:36.943691 kubelet[1417]: I1213 03:54:36.937885 1417 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-lib-modules\") on node \"172.24.4.199\" DevicePath \"\"" Dec 13 03:54:36.943691 kubelet[1417]: I1213 03:54:36.937905 1417 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-host-proc-sys-net\") on node \"172.24.4.199\" DevicePath \"\"" Dec 13 03:54:36.943691 kubelet[1417]: I1213 03:54:36.937927 1417 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8275add1-72e9-459f-81e0-2aa41f024067-cilium-ipsec-secrets\") on node \"172.24.4.199\" DevicePath \"\"" Dec 13 03:54:36.943691 kubelet[1417]: I1213 03:54:36.937951 1417 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8275add1-72e9-459f-81e0-2aa41f024067-hostproc\") on node \"172.24.4.199\" DevicePath \"\"" Dec 13 03:54:36.943691 kubelet[1417]: I1213 03:54:36.938003 1417 reconciler_common.go:289] "Volume detached for 
volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8275add1-72e9-459f-81e0-2aa41f024067-cilium-config-path\") on node \"172.24.4.199\" DevicePath \"\"" Dec 13 03:54:37.033636 kubelet[1417]: E1213 03:54:37.033073 1417 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 03:54:37.036342 systemd[1]: var-lib-kubelet-pods-8275add1\x2d72e9\x2d459f\x2d81e0\x2d2aa41f024067-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcm62f.mount: Deactivated successfully. Dec 13 03:54:37.036619 systemd[1]: var-lib-kubelet-pods-8275add1\x2d72e9\x2d459f\x2d81e0\x2d2aa41f024067-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 03:54:37.087232 systemd[1]: Removed slice kubepods-burstable-pod8275add1_72e9_459f_81e0_2aa41f024067.slice. Dec 13 03:54:37.353494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3961505247.mount: Deactivated successfully. 
Dec 13 03:54:37.682178 kubelet[1417]: I1213 03:54:37.681919 1417 scope.go:117] "RemoveContainer" containerID="701335312af150b09f9025206167003b51ecc170b1034637d4dd4b065c86a75b" Dec 13 03:54:37.689386 env[1143]: time="2024-12-13T03:54:37.689312068Z" level=info msg="RemoveContainer for \"701335312af150b09f9025206167003b51ecc170b1034637d4dd4b065c86a75b\"" Dec 13 03:54:37.861897 kubelet[1417]: E1213 03:54:37.861793 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:37.902155 env[1143]: time="2024-12-13T03:54:37.902035995Z" level=info msg="RemoveContainer for \"701335312af150b09f9025206167003b51ecc170b1034637d4dd4b065c86a75b\" returns successfully" Dec 13 03:54:38.715749 kubelet[1417]: I1213 03:54:38.715670 1417 setters.go:580] "Node became not ready" node="172.24.4.199" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T03:54:38Z","lastTransitionTime":"2024-12-13T03:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 03:54:38.721618 kubelet[1417]: W1213 03:54:38.721562 1417 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8275add1_72e9_459f_81e0_2aa41f024067.slice/cri-containerd-5dba487ab00f514b2a04157ae3cc55fe37c5c15863e4c8aad78d514bb665ff91.scope WatchSource:0}: container "5dba487ab00f514b2a04157ae3cc55fe37c5c15863e4c8aad78d514bb665ff91" in namespace "k8s.io": not found Dec 13 03:54:38.862772 kubelet[1417]: E1213 03:54:38.862704 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:38.920424 kubelet[1417]: I1213 03:54:38.920332 1417 topology_manager.go:215] "Topology Admit Handler" podUID="532c6dec-c72f-4b47-88a8-4d2cc54dbb98" podNamespace="kube-system" 
podName="cilium-4wqdc" Dec 13 03:54:38.920896 kubelet[1417]: E1213 03:54:38.920865 1417 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8275add1-72e9-459f-81e0-2aa41f024067" containerName="mount-cgroup" Dec 13 03:54:38.921330 kubelet[1417]: I1213 03:54:38.921301 1417 memory_manager.go:354] "RemoveStaleState removing state" podUID="8275add1-72e9-459f-81e0-2aa41f024067" containerName="mount-cgroup" Dec 13 03:54:38.921610 kubelet[1417]: I1213 03:54:38.921583 1417 memory_manager.go:354] "RemoveStaleState removing state" podUID="8275add1-72e9-459f-81e0-2aa41f024067" containerName="mount-cgroup" Dec 13 03:54:38.921950 kubelet[1417]: E1213 03:54:38.921920 1417 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8275add1-72e9-459f-81e0-2aa41f024067" containerName="mount-cgroup" Dec 13 03:54:38.935958 systemd[1]: Created slice kubepods-burstable-pod532c6dec_c72f_4b47_88a8_4d2cc54dbb98.slice. Dec 13 03:54:38.954959 kubelet[1417]: I1213 03:54:38.954877 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/532c6dec-c72f-4b47-88a8-4d2cc54dbb98-xtables-lock\") pod \"cilium-4wqdc\" (UID: \"532c6dec-c72f-4b47-88a8-4d2cc54dbb98\") " pod="kube-system/cilium-4wqdc" Dec 13 03:54:38.955259 kubelet[1417]: I1213 03:54:38.955000 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/532c6dec-c72f-4b47-88a8-4d2cc54dbb98-cilium-ipsec-secrets\") pod \"cilium-4wqdc\" (UID: \"532c6dec-c72f-4b47-88a8-4d2cc54dbb98\") " pod="kube-system/cilium-4wqdc" Dec 13 03:54:38.955259 kubelet[1417]: I1213 03:54:38.955057 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/532c6dec-c72f-4b47-88a8-4d2cc54dbb98-host-proc-sys-net\") pod \"cilium-4wqdc\" (UID: 
\"532c6dec-c72f-4b47-88a8-4d2cc54dbb98\") " pod="kube-system/cilium-4wqdc" Dec 13 03:54:38.955259 kubelet[1417]: I1213 03:54:38.955100 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/532c6dec-c72f-4b47-88a8-4d2cc54dbb98-hostproc\") pod \"cilium-4wqdc\" (UID: \"532c6dec-c72f-4b47-88a8-4d2cc54dbb98\") " pod="kube-system/cilium-4wqdc" Dec 13 03:54:38.955259 kubelet[1417]: I1213 03:54:38.955141 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/532c6dec-c72f-4b47-88a8-4d2cc54dbb98-cilium-cgroup\") pod \"cilium-4wqdc\" (UID: \"532c6dec-c72f-4b47-88a8-4d2cc54dbb98\") " pod="kube-system/cilium-4wqdc" Dec 13 03:54:38.955259 kubelet[1417]: I1213 03:54:38.955190 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5xzq\" (UniqueName: \"kubernetes.io/projected/532c6dec-c72f-4b47-88a8-4d2cc54dbb98-kube-api-access-f5xzq\") pod \"cilium-4wqdc\" (UID: \"532c6dec-c72f-4b47-88a8-4d2cc54dbb98\") " pod="kube-system/cilium-4wqdc" Dec 13 03:54:38.955759 kubelet[1417]: I1213 03:54:38.955265 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/532c6dec-c72f-4b47-88a8-4d2cc54dbb98-etc-cni-netd\") pod \"cilium-4wqdc\" (UID: \"532c6dec-c72f-4b47-88a8-4d2cc54dbb98\") " pod="kube-system/cilium-4wqdc" Dec 13 03:54:38.955759 kubelet[1417]: I1213 03:54:38.955312 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/532c6dec-c72f-4b47-88a8-4d2cc54dbb98-clustermesh-secrets\") pod \"cilium-4wqdc\" (UID: \"532c6dec-c72f-4b47-88a8-4d2cc54dbb98\") " pod="kube-system/cilium-4wqdc" Dec 13 03:54:38.955759 kubelet[1417]: I1213 
03:54:38.955358 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/532c6dec-c72f-4b47-88a8-4d2cc54dbb98-cilium-config-path\") pod \"cilium-4wqdc\" (UID: \"532c6dec-c72f-4b47-88a8-4d2cc54dbb98\") " pod="kube-system/cilium-4wqdc" Dec 13 03:54:38.955759 kubelet[1417]: I1213 03:54:38.955454 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/532c6dec-c72f-4b47-88a8-4d2cc54dbb98-cilium-run\") pod \"cilium-4wqdc\" (UID: \"532c6dec-c72f-4b47-88a8-4d2cc54dbb98\") " pod="kube-system/cilium-4wqdc" Dec 13 03:54:38.955759 kubelet[1417]: I1213 03:54:38.955496 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/532c6dec-c72f-4b47-88a8-4d2cc54dbb98-bpf-maps\") pod \"cilium-4wqdc\" (UID: \"532c6dec-c72f-4b47-88a8-4d2cc54dbb98\") " pod="kube-system/cilium-4wqdc" Dec 13 03:54:38.955759 kubelet[1417]: I1213 03:54:38.955539 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/532c6dec-c72f-4b47-88a8-4d2cc54dbb98-host-proc-sys-kernel\") pod \"cilium-4wqdc\" (UID: \"532c6dec-c72f-4b47-88a8-4d2cc54dbb98\") " pod="kube-system/cilium-4wqdc" Dec 13 03:54:38.955759 kubelet[1417]: I1213 03:54:38.955581 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/532c6dec-c72f-4b47-88a8-4d2cc54dbb98-cni-path\") pod \"cilium-4wqdc\" (UID: \"532c6dec-c72f-4b47-88a8-4d2cc54dbb98\") " pod="kube-system/cilium-4wqdc" Dec 13 03:54:38.955759 kubelet[1417]: I1213 03:54:38.955623 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/532c6dec-c72f-4b47-88a8-4d2cc54dbb98-lib-modules\") pod \"cilium-4wqdc\" (UID: \"532c6dec-c72f-4b47-88a8-4d2cc54dbb98\") " pod="kube-system/cilium-4wqdc" Dec 13 03:54:38.955759 kubelet[1417]: I1213 03:54:38.955667 1417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/532c6dec-c72f-4b47-88a8-4d2cc54dbb98-hubble-tls\") pod \"cilium-4wqdc\" (UID: \"532c6dec-c72f-4b47-88a8-4d2cc54dbb98\") " pod="kube-system/cilium-4wqdc" Dec 13 03:54:39.078731 kubelet[1417]: I1213 03:54:39.078555 1417 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8275add1-72e9-459f-81e0-2aa41f024067" path="/var/lib/kubelet/pods/8275add1-72e9-459f-81e0-2aa41f024067/volumes" Dec 13 03:54:39.251610 env[1143]: time="2024-12-13T03:54:39.251543760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4wqdc,Uid:532c6dec-c72f-4b47-88a8-4d2cc54dbb98,Namespace:kube-system,Attempt:0,}" Dec 13 03:54:39.341659 env[1143]: time="2024-12-13T03:54:39.341357683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:54:39.342182 env[1143]: time="2024-12-13T03:54:39.341463783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:54:39.342465 env[1143]: time="2024-12-13T03:54:39.342377984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:54:39.343068 env[1143]: time="2024-12-13T03:54:39.342956835Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/467600a0482648309674057f508bad70157ab48dc918533bf21b11c1c7b90cc7 pid=3178 runtime=io.containerd.runc.v2 Dec 13 03:54:39.367277 systemd[1]: Started cri-containerd-467600a0482648309674057f508bad70157ab48dc918533bf21b11c1c7b90cc7.scope. Dec 13 03:54:39.417583 env[1143]: time="2024-12-13T03:54:39.417497419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4wqdc,Uid:532c6dec-c72f-4b47-88a8-4d2cc54dbb98,Namespace:kube-system,Attempt:0,} returns sandbox id \"467600a0482648309674057f508bad70157ab48dc918533bf21b11c1c7b90cc7\"" Dec 13 03:54:39.421681 env[1143]: time="2024-12-13T03:54:39.421641172Z" level=info msg="CreateContainer within sandbox \"467600a0482648309674057f508bad70157ab48dc918533bf21b11c1c7b90cc7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 03:54:39.449358 env[1143]: time="2024-12-13T03:54:39.449291886Z" level=info msg="CreateContainer within sandbox \"467600a0482648309674057f508bad70157ab48dc918533bf21b11c1c7b90cc7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"46fb1f4147005bbc9552589d5854f8968137e117f9241aaf8626a028ce0c318e\"" Dec 13 03:54:39.450413 env[1143]: time="2024-12-13T03:54:39.450373613Z" level=info msg="StartContainer for \"46fb1f4147005bbc9552589d5854f8968137e117f9241aaf8626a028ce0c318e\"" Dec 13 03:54:39.483390 systemd[1]: Started cri-containerd-46fb1f4147005bbc9552589d5854f8968137e117f9241aaf8626a028ce0c318e.scope. 
Dec 13 03:54:39.546409 env[1143]: time="2024-12-13T03:54:39.546339541Z" level=info msg="StartContainer for \"46fb1f4147005bbc9552589d5854f8968137e117f9241aaf8626a028ce0c318e\" returns successfully" Dec 13 03:54:39.577797 systemd[1]: cri-containerd-46fb1f4147005bbc9552589d5854f8968137e117f9241aaf8626a028ce0c318e.scope: Deactivated successfully. Dec 13 03:54:39.863719 kubelet[1417]: E1213 03:54:39.863639 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:39.942319 env[1143]: time="2024-12-13T03:54:39.942202925Z" level=info msg="shim disconnected" id=46fb1f4147005bbc9552589d5854f8968137e117f9241aaf8626a028ce0c318e Dec 13 03:54:39.943398 env[1143]: time="2024-12-13T03:54:39.943336760Z" level=warning msg="cleaning up after shim disconnected" id=46fb1f4147005bbc9552589d5854f8968137e117f9241aaf8626a028ce0c318e namespace=k8s.io Dec 13 03:54:39.943592 env[1143]: time="2024-12-13T03:54:39.943553959Z" level=info msg="cleaning up dead shim" Dec 13 03:54:39.973477 env[1143]: time="2024-12-13T03:54:39.973348059Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:54:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3263 runtime=io.containerd.runc.v2\n" Dec 13 03:54:40.244048 env[1143]: time="2024-12-13T03:54:40.243916988Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:54:40.246287 env[1143]: time="2024-12-13T03:54:40.246231085Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:54:40.249164 env[1143]: time="2024-12-13T03:54:40.249098904Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:54:40.250741 env[1143]: time="2024-12-13T03:54:40.250676173Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 03:54:40.254357 env[1143]: time="2024-12-13T03:54:40.254302190Z" level=info msg="CreateContainer within sandbox \"8ecb8907b93a5b573049dfa7bfa38f7bf0b45da9640b7b455b610b97a5875fbb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 03:54:40.270687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount523746887.mount: Deactivated successfully. Dec 13 03:54:40.277488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1739003517.mount: Deactivated successfully. Dec 13 03:54:40.289231 env[1143]: time="2024-12-13T03:54:40.289154136Z" level=info msg="CreateContainer within sandbox \"8ecb8907b93a5b573049dfa7bfa38f7bf0b45da9640b7b455b610b97a5875fbb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ade5ba00eca80165da63ef809bec8ab903c225eed1e62b7e9abe24b6bff135cf\"" Dec 13 03:54:40.290702 env[1143]: time="2024-12-13T03:54:40.290609786Z" level=info msg="StartContainer for \"ade5ba00eca80165da63ef809bec8ab903c225eed1e62b7e9abe24b6bff135cf\"" Dec 13 03:54:40.320225 systemd[1]: Started cri-containerd-ade5ba00eca80165da63ef809bec8ab903c225eed1e62b7e9abe24b6bff135cf.scope. 
Dec 13 03:54:40.555056 env[1143]: time="2024-12-13T03:54:40.554799368Z" level=info msg="StartContainer for \"ade5ba00eca80165da63ef809bec8ab903c225eed1e62b7e9abe24b6bff135cf\" returns successfully" Dec 13 03:54:40.710741 env[1143]: time="2024-12-13T03:54:40.710655761Z" level=info msg="CreateContainer within sandbox \"467600a0482648309674057f508bad70157ab48dc918533bf21b11c1c7b90cc7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 03:54:40.747366 env[1143]: time="2024-12-13T03:54:40.747255387Z" level=info msg="CreateContainer within sandbox \"467600a0482648309674057f508bad70157ab48dc918533bf21b11c1c7b90cc7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"634272ded98118f08e07008399d514e390dd7c3afd4e60de1c4ca0f1c9429285\"" Dec 13 03:54:40.749318 env[1143]: time="2024-12-13T03:54:40.749252176Z" level=info msg="StartContainer for \"634272ded98118f08e07008399d514e390dd7c3afd4e60de1c4ca0f1c9429285\"" Dec 13 03:54:40.771309 kubelet[1417]: I1213 03:54:40.771234 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-b4kvs" podStartSLOduration=2.06721251 podStartE2EDuration="6.771194497s" podCreationTimestamp="2024-12-13 03:54:34 +0000 UTC" firstStartedPulling="2024-12-13 03:54:35.548245096 +0000 UTC m=+89.889121063" lastFinishedPulling="2024-12-13 03:54:40.252227093 +0000 UTC m=+94.593103050" observedRunningTime="2024-12-13 03:54:40.769175215 +0000 UTC m=+95.110051182" watchObservedRunningTime="2024-12-13 03:54:40.771194497 +0000 UTC m=+95.112070464" Dec 13 03:54:40.802906 systemd[1]: Started cri-containerd-634272ded98118f08e07008399d514e390dd7c3afd4e60de1c4ca0f1c9429285.scope. 
Dec 13 03:54:40.865682 kubelet[1417]: E1213 03:54:40.864373 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:40.866179 env[1143]: time="2024-12-13T03:54:40.864721451Z" level=info msg="StartContainer for \"634272ded98118f08e07008399d514e390dd7c3afd4e60de1c4ca0f1c9429285\" returns successfully" Dec 13 03:54:40.892613 systemd[1]: cri-containerd-634272ded98118f08e07008399d514e390dd7c3afd4e60de1c4ca0f1c9429285.scope: Deactivated successfully. Dec 13 03:54:40.942476 env[1143]: time="2024-12-13T03:54:40.942404936Z" level=info msg="shim disconnected" id=634272ded98118f08e07008399d514e390dd7c3afd4e60de1c4ca0f1c9429285 Dec 13 03:54:40.942793 env[1143]: time="2024-12-13T03:54:40.942773570Z" level=warning msg="cleaning up after shim disconnected" id=634272ded98118f08e07008399d514e390dd7c3afd4e60de1c4ca0f1c9429285 namespace=k8s.io Dec 13 03:54:40.942880 env[1143]: time="2024-12-13T03:54:40.942865374Z" level=info msg="cleaning up dead shim" Dec 13 03:54:40.951827 env[1143]: time="2024-12-13T03:54:40.951765009Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:54:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3365 runtime=io.containerd.runc.v2\n" Dec 13 03:54:41.717683 env[1143]: time="2024-12-13T03:54:41.717578657Z" level=info msg="CreateContainer within sandbox \"467600a0482648309674057f508bad70157ab48dc918533bf21b11c1c7b90cc7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 03:54:41.864861 kubelet[1417]: E1213 03:54:41.864775 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:42.024043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1368934928.mount: Deactivated successfully. 
Dec 13 03:54:42.035498 kubelet[1417]: E1213 03:54:42.035395 1417 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 03:54:42.040299 env[1143]: time="2024-12-13T03:54:42.040138965Z" level=info msg="CreateContainer within sandbox \"467600a0482648309674057f508bad70157ab48dc918533bf21b11c1c7b90cc7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"74f0c67591285731a8df7a273f70ac53b1cb05bfed867cb6c7d1655693aab388\"" Dec 13 03:54:42.041425 env[1143]: time="2024-12-13T03:54:42.041365454Z" level=info msg="StartContainer for \"74f0c67591285731a8df7a273f70ac53b1cb05bfed867cb6c7d1655693aab388\"" Dec 13 03:54:42.092229 systemd[1]: Started cri-containerd-74f0c67591285731a8df7a273f70ac53b1cb05bfed867cb6c7d1655693aab388.scope. Dec 13 03:54:42.150095 env[1143]: time="2024-12-13T03:54:42.149962503Z" level=info msg="StartContainer for \"74f0c67591285731a8df7a273f70ac53b1cb05bfed867cb6c7d1655693aab388\" returns successfully" Dec 13 03:54:42.158655 systemd[1]: cri-containerd-74f0c67591285731a8df7a273f70ac53b1cb05bfed867cb6c7d1655693aab388.scope: Deactivated successfully. Dec 13 03:54:42.183320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74f0c67591285731a8df7a273f70ac53b1cb05bfed867cb6c7d1655693aab388-rootfs.mount: Deactivated successfully. 
Dec 13 03:54:42.192130 env[1143]: time="2024-12-13T03:54:42.192072967Z" level=info msg="shim disconnected" id=74f0c67591285731a8df7a273f70ac53b1cb05bfed867cb6c7d1655693aab388 Dec 13 03:54:42.192394 env[1143]: time="2024-12-13T03:54:42.192372090Z" level=warning msg="cleaning up after shim disconnected" id=74f0c67591285731a8df7a273f70ac53b1cb05bfed867cb6c7d1655693aab388 namespace=k8s.io Dec 13 03:54:42.192476 env[1143]: time="2024-12-13T03:54:42.192460265Z" level=info msg="cleaning up dead shim" Dec 13 03:54:42.200899 env[1143]: time="2024-12-13T03:54:42.200839838Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:54:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3423 runtime=io.containerd.runc.v2\n" Dec 13 03:54:42.727539 env[1143]: time="2024-12-13T03:54:42.727435667Z" level=info msg="CreateContainer within sandbox \"467600a0482648309674057f508bad70157ab48dc918533bf21b11c1c7b90cc7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 03:54:42.760280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount730774356.mount: Deactivated successfully. Dec 13 03:54:42.777875 env[1143]: time="2024-12-13T03:54:42.777763929Z" level=info msg="CreateContainer within sandbox \"467600a0482648309674057f508bad70157ab48dc918533bf21b11c1c7b90cc7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f13e657e60ba2f3bea38de86d74187b691a19e18893be6c0ddb43557f16d6fc4\"" Dec 13 03:54:42.779500 env[1143]: time="2024-12-13T03:54:42.779441125Z" level=info msg="StartContainer for \"f13e657e60ba2f3bea38de86d74187b691a19e18893be6c0ddb43557f16d6fc4\"" Dec 13 03:54:42.820960 systemd[1]: Started cri-containerd-f13e657e60ba2f3bea38de86d74187b691a19e18893be6c0ddb43557f16d6fc4.scope. Dec 13 03:54:42.859151 systemd[1]: cri-containerd-f13e657e60ba2f3bea38de86d74187b691a19e18893be6c0ddb43557f16d6fc4.scope: Deactivated successfully. 
Dec 13 03:54:42.863049 env[1143]: time="2024-12-13T03:54:42.861556790Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod532c6dec_c72f_4b47_88a8_4d2cc54dbb98.slice/cri-containerd-f13e657e60ba2f3bea38de86d74187b691a19e18893be6c0ddb43557f16d6fc4.scope/memory.events\": no such file or directory" Dec 13 03:54:42.866303 kubelet[1417]: E1213 03:54:42.866260 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:42.867185 env[1143]: time="2024-12-13T03:54:42.867128097Z" level=info msg="StartContainer for \"f13e657e60ba2f3bea38de86d74187b691a19e18893be6c0ddb43557f16d6fc4\" returns successfully" Dec 13 03:54:42.891757 env[1143]: time="2024-12-13T03:54:42.891691882Z" level=info msg="shim disconnected" id=f13e657e60ba2f3bea38de86d74187b691a19e18893be6c0ddb43557f16d6fc4 Dec 13 03:54:42.891757 env[1143]: time="2024-12-13T03:54:42.891753509Z" level=warning msg="cleaning up after shim disconnected" id=f13e657e60ba2f3bea38de86d74187b691a19e18893be6c0ddb43557f16d6fc4 namespace=k8s.io Dec 13 03:54:42.891757 env[1143]: time="2024-12-13T03:54:42.891766693Z" level=info msg="cleaning up dead shim" Dec 13 03:54:42.900421 env[1143]: time="2024-12-13T03:54:42.900370768Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:54:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3481 runtime=io.containerd.runc.v2\n" Dec 13 03:54:43.759071 env[1143]: time="2024-12-13T03:54:43.758046226Z" level=info msg="CreateContainer within sandbox \"467600a0482648309674057f508bad70157ab48dc918533bf21b11c1c7b90cc7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 03:54:43.867491 kubelet[1417]: E1213 03:54:43.867409 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:44.525280 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount405525499.mount: Deactivated successfully. Dec 13 03:54:44.690885 env[1143]: time="2024-12-13T03:54:44.690724517Z" level=info msg="CreateContainer within sandbox \"467600a0482648309674057f508bad70157ab48dc918533bf21b11c1c7b90cc7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9b4275a94221b91e3370103ba8edb077c54a7b3b250947ba9dc29ad1186648f0\"" Dec 13 03:54:44.693072 env[1143]: time="2024-12-13T03:54:44.692960234Z" level=info msg="StartContainer for \"9b4275a94221b91e3370103ba8edb077c54a7b3b250947ba9dc29ad1186648f0\"" Dec 13 03:54:44.762695 systemd[1]: Started cri-containerd-9b4275a94221b91e3370103ba8edb077c54a7b3b250947ba9dc29ad1186648f0.scope. Dec 13 03:54:44.939939 kubelet[1417]: E1213 03:54:44.869155 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:44.998299 env[1143]: time="2024-12-13T03:54:44.998186220Z" level=info msg="StartContainer for \"9b4275a94221b91e3370103ba8edb077c54a7b3b250947ba9dc29ad1186648f0\" returns successfully" Dec 13 03:54:45.516887 systemd[1]: run-containerd-runc-k8s.io-9b4275a94221b91e3370103ba8edb077c54a7b3b250947ba9dc29ad1186648f0-runc.33Xk6R.mount: Deactivated successfully. 
Dec 13 03:54:45.870249 kubelet[1417]: E1213 03:54:45.870003 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:46.662049 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 03:54:46.715206 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Dec 13 03:54:46.766008 kubelet[1417]: E1213 03:54:46.765904 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:46.870789 kubelet[1417]: E1213 03:54:46.870559 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:47.872528 kubelet[1417]: E1213 03:54:47.872453 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:48.874373 kubelet[1417]: E1213 03:54:48.874271 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:49.875002 kubelet[1417]: E1213 03:54:49.874922 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:50.365171 systemd-networkd[974]: lxc_health: Link UP Dec 13 03:54:50.372194 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 03:54:50.372849 systemd-networkd[974]: lxc_health: Gained carrier Dec 13 03:54:50.733382 kubelet[1417]: E1213 03:54:50.733221 1417 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58988->127.0.0.1:39703: write tcp 127.0.0.1:58988->127.0.0.1:39703: write: connection reset by peer Dec 13 03:54:50.876056 kubelet[1417]: E1213 03:54:50.875926 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:51.297470 kubelet[1417]: I1213 
03:54:51.297373 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4wqdc" podStartSLOduration=14.297350263 podStartE2EDuration="14.297350263s" podCreationTimestamp="2024-12-13 03:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:54:46.841812925 +0000 UTC m=+101.182688952" watchObservedRunningTime="2024-12-13 03:54:51.297350263 +0000 UTC m=+105.638226230" Dec 13 03:54:51.877355 kubelet[1417]: E1213 03:54:51.877229 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:52.022452 systemd-networkd[974]: lxc_health: Gained IPv6LL Dec 13 03:54:52.877724 kubelet[1417]: E1213 03:54:52.877652 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:52.917676 systemd[1]: run-containerd-runc-k8s.io-9b4275a94221b91e3370103ba8edb077c54a7b3b250947ba9dc29ad1186648f0-runc.ZbGHoS.mount: Deactivated successfully. 
Dec 13 03:54:53.878944 kubelet[1417]: E1213 03:54:53.878849 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:54.879710 kubelet[1417]: E1213 03:54:54.879672 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:55.165219 kubelet[1417]: E1213 03:54:55.164806 1417 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59008->127.0.0.1:39703: write tcp 127.0.0.1:59008->127.0.0.1:39703: write: broken pipe Dec 13 03:54:55.881310 kubelet[1417]: E1213 03:54:55.881206 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:56.883329 kubelet[1417]: E1213 03:54:56.883251 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:57.884312 kubelet[1417]: E1213 03:54:57.884251 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:58.885817 kubelet[1417]: E1213 03:54:58.885753 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:54:59.887518 kubelet[1417]: E1213 03:54:59.887449 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:55:00.888571 kubelet[1417]: E1213 03:55:00.888491 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:55:01.890211 kubelet[1417]: E1213 03:55:01.890138 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:55:02.890682 kubelet[1417]: E1213 03:55:02.890584 1417 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:55:03.891306 kubelet[1417]: E1213 03:55:03.891164 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:55:04.892341 kubelet[1417]: E1213 03:55:04.892167 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:55:05.892515 kubelet[1417]: E1213 03:55:05.892455 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"