Mar 17 19:52:33.888926 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025
Mar 17 19:52:33.888978 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 19:52:33.889002 kernel: BIOS-provided physical RAM map:
Mar 17 19:52:33.889024 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 19:52:33.889041 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 19:52:33.889057 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 19:52:33.889077 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Mar 17 19:52:33.889094 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Mar 17 19:52:33.889111 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 19:52:33.889127 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 19:52:33.889143 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Mar 17 19:52:33.889160 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 17 19:52:33.889179 kernel: NX (Execute Disable) protection: active
Mar 17 19:52:33.889196 kernel: SMBIOS 3.0.0 present.
Mar 17 19:52:33.889216 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Mar 17 19:52:33.889234 kernel: Hypervisor detected: KVM
Mar 17 19:52:33.889252 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 19:52:33.889269 kernel: kvm-clock: cpu 0, msr 7519a001, primary cpu clock
Mar 17 19:52:33.889289 kernel: kvm-clock: using sched offset of 4132488556 cycles
Mar 17 19:52:33.889309 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 19:52:33.889327 kernel: tsc: Detected 1996.249 MHz processor
Mar 17 19:52:33.889346 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 19:52:33.889365 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 19:52:33.889384 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Mar 17 19:52:33.889402 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 19:52:33.889420 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Mar 17 19:52:33.889438 kernel: ACPI: Early table checksum verification disabled
Mar 17 19:52:33.889459 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Mar 17 19:52:33.889477 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 19:52:33.889496 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 19:52:33.889515 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 19:52:33.889532 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Mar 17 19:52:33.889550 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 19:52:33.889568 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 19:52:33.889587 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Mar 17 19:52:33.889608 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Mar 17 19:52:33.889626 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Mar 17 19:52:33.889644 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Mar 17 19:52:33.889661 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Mar 17 19:52:33.894338 kernel: No NUMA configuration found
Mar 17 19:52:33.894368 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Mar 17 19:52:33.894388 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Mar 17 19:52:33.894410 kernel: Zone ranges:
Mar 17 19:52:33.894427 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 19:52:33.894441 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 17 19:52:33.894455 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Mar 17 19:52:33.894469 kernel: Movable zone start for each node
Mar 17 19:52:33.894483 kernel: Early memory node ranges
Mar 17 19:52:33.894497 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 19:52:33.894511 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Mar 17 19:52:33.894528 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Mar 17 19:52:33.894542 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Mar 17 19:52:33.894556 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 19:52:33.894571 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 19:52:33.894585 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Mar 17 19:52:33.894599 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 19:52:33.894614 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 19:52:33.894628 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 19:52:33.894642 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 19:52:33.894659 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 19:52:33.894692 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 19:52:33.894707 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 19:52:33.894721 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 19:52:33.894736 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 19:52:33.894750 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 19:52:33.894764 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Mar 17 19:52:33.894779 kernel: Booting paravirtualized kernel on KVM
Mar 17 19:52:33.894793 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 19:52:33.894811 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Mar 17 19:52:33.894826 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Mar 17 19:52:33.894840 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Mar 17 19:52:33.894854 kernel: pcpu-alloc: [0] 0 1
Mar 17 19:52:33.894868 kernel: kvm-guest: stealtime: cpu 0, msr 13bc1c0c0
Mar 17 19:52:33.894882 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 17 19:52:33.894896 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Mar 17 19:52:33.894910 kernel: Policy zone: Normal
Mar 17 19:52:33.894927 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 19:52:33.894945 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 19:52:33.894959 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 19:52:33.894973 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 19:52:33.894988 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 19:52:33.895003 kernel: Memory: 3968276K/4193772K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 225236K reserved, 0K cma-reserved)
Mar 17 19:52:33.895017 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 19:52:33.895031 kernel: ftrace: allocating 34580 entries in 136 pages
Mar 17 19:52:33.895045 kernel: ftrace: allocated 136 pages with 2 groups
Mar 17 19:52:33.895062 kernel: rcu: Hierarchical RCU implementation.
Mar 17 19:52:33.895077 kernel: rcu: RCU event tracing is enabled.
Mar 17 19:52:33.895092 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 19:52:33.895106 kernel: Rude variant of Tasks RCU enabled.
Mar 17 19:52:33.895121 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 19:52:33.895135 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 19:52:33.895150 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 19:52:33.895164 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 17 19:52:33.895178 kernel: Console: colour VGA+ 80x25
Mar 17 19:52:33.895195 kernel: printk: console [tty0] enabled
Mar 17 19:52:33.895209 kernel: printk: console [ttyS0] enabled
Mar 17 19:52:33.895223 kernel: ACPI: Core revision 20210730
Mar 17 19:52:33.895237 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 19:52:33.895251 kernel: x2apic enabled
Mar 17 19:52:33.895265 kernel: Switched APIC routing to physical x2apic.
Mar 17 19:52:33.895280 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 19:52:33.895294 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 17 19:52:33.895309 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Mar 17 19:52:33.895325 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 17 19:52:33.895340 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 17 19:52:33.895355 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 19:52:33.895369 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 19:52:33.895383 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 19:52:33.895397 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 19:52:33.895411 kernel: Speculative Store Bypass: Vulnerable
Mar 17 19:52:33.895426 kernel: x86/fpu: x87 FPU will use FXSAVE
Mar 17 19:52:33.895440 kernel: Freeing SMP alternatives memory: 32K
Mar 17 19:52:33.895456 kernel: pid_max: default: 32768 minimum: 301
Mar 17 19:52:33.895470 kernel: LSM: Security Framework initializing
Mar 17 19:52:33.895484 kernel: SELinux: Initializing.
Mar 17 19:52:33.895499 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 19:52:33.895514 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 19:52:33.895529 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Mar 17 19:52:33.895552 kernel: Performance Events: AMD PMU driver.
Mar 17 19:52:33.895568 kernel: ... version: 0
Mar 17 19:52:33.895583 kernel: ... bit width: 48
Mar 17 19:52:33.895597 kernel: ... generic registers: 4
Mar 17 19:52:33.895612 kernel: ... value mask: 0000ffffffffffff
Mar 17 19:52:33.895627 kernel: ... max period: 00007fffffffffff
Mar 17 19:52:33.895644 kernel: ... fixed-purpose events: 0
Mar 17 19:52:33.895659 kernel: ... event mask: 000000000000000f
Mar 17 19:52:33.895692 kernel: signal: max sigframe size: 1440
Mar 17 19:52:33.895707 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 19:52:33.895722 kernel: smp: Bringing up secondary CPUs ...
Mar 17 19:52:33.895740 kernel: x86: Booting SMP configuration:
Mar 17 19:52:33.895755 kernel: .... node #0, CPUs: #1
Mar 17 19:52:33.895769 kernel: kvm-clock: cpu 1, msr 7519a041, secondary cpu clock
Mar 17 19:52:33.895784 kernel: kvm-guest: stealtime: cpu 1, msr 13bd1c0c0
Mar 17 19:52:33.895799 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 19:52:33.895814 kernel: smpboot: Max logical packages: 2
Mar 17 19:52:33.895829 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Mar 17 19:52:33.895844 kernel: devtmpfs: initialized
Mar 17 19:52:33.895858 kernel: x86/mm: Memory block size: 128MB
Mar 17 19:52:33.895876 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 19:52:33.895891 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 19:52:33.895906 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 19:52:33.895921 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 19:52:33.895936 kernel: audit: initializing netlink subsys (disabled)
Mar 17 19:52:33.895951 kernel: audit: type=2000 audit(1742241153.355:1): state=initialized audit_enabled=0 res=1
Mar 17 19:52:33.895966 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 19:52:33.895981 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 19:52:33.895995 kernel: cpuidle: using governor menu
Mar 17 19:52:33.896012 kernel: ACPI: bus type PCI registered
Mar 17 19:52:33.896027 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 19:52:33.896042 kernel: dca service started, version 1.12.1
Mar 17 19:52:33.896057 kernel: PCI: Using configuration type 1 for base access
Mar 17 19:52:33.896072 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 19:52:33.896087 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 19:52:33.896102 kernel: ACPI: Added _OSI(Module Device)
Mar 17 19:52:33.896117 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 19:52:33.896131 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 19:52:33.896149 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 19:52:33.896164 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 19:52:33.896179 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 19:52:33.896194 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 19:52:33.896209 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 19:52:33.896224 kernel: ACPI: Interpreter enabled
Mar 17 19:52:33.896238 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 17 19:52:33.896253 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 19:52:33.896269 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 19:52:33.896286 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 17 19:52:33.896330 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 19:52:33.896579 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 19:52:33.896797 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Mar 17 19:52:33.896824 kernel: acpiphp: Slot [3] registered
Mar 17 19:52:33.896839 kernel: acpiphp: Slot [4] registered
Mar 17 19:52:33.896854 kernel: acpiphp: Slot [5] registered
Mar 17 19:52:33.896870 kernel: acpiphp: Slot [6] registered
Mar 17 19:52:33.896891 kernel: acpiphp: Slot [7] registered
Mar 17 19:52:33.896905 kernel: acpiphp: Slot [8] registered
Mar 17 19:52:33.896920 kernel: acpiphp: Slot [9] registered
Mar 17 19:52:33.896935 kernel: acpiphp: Slot [10] registered
Mar 17 19:52:33.896950 kernel: acpiphp: Slot [11] registered
Mar 17 19:52:33.896965 kernel: acpiphp: Slot [12] registered
Mar 17 19:52:33.896979 kernel: acpiphp: Slot [13] registered
Mar 17 19:52:33.896994 kernel: acpiphp: Slot [14] registered
Mar 17 19:52:33.897009 kernel: acpiphp: Slot [15] registered
Mar 17 19:52:33.897026 kernel: acpiphp: Slot [16] registered
Mar 17 19:52:33.897041 kernel: acpiphp: Slot [17] registered
Mar 17 19:52:33.897056 kernel: acpiphp: Slot [18] registered
Mar 17 19:52:33.897070 kernel: acpiphp: Slot [19] registered
Mar 17 19:52:33.897085 kernel: acpiphp: Slot [20] registered
Mar 17 19:52:33.897100 kernel: acpiphp: Slot [21] registered
Mar 17 19:52:33.897115 kernel: acpiphp: Slot [22] registered
Mar 17 19:52:33.897129 kernel: acpiphp: Slot [23] registered
Mar 17 19:52:33.897144 kernel: acpiphp: Slot [24] registered
Mar 17 19:52:33.897158 kernel: acpiphp: Slot [25] registered
Mar 17 19:52:33.897175 kernel: acpiphp: Slot [26] registered
Mar 17 19:52:33.897190 kernel: acpiphp: Slot [27] registered
Mar 17 19:52:33.897204 kernel: acpiphp: Slot [28] registered
Mar 17 19:52:33.897219 kernel: acpiphp: Slot [29] registered
Mar 17 19:52:33.897234 kernel: acpiphp: Slot [30] registered
Mar 17 19:52:33.897248 kernel: acpiphp: Slot [31] registered
Mar 17 19:52:33.897263 kernel: PCI host bridge to bus 0000:00
Mar 17 19:52:33.897423 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 19:52:33.897572 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 19:52:33.897746 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 19:52:33.897912 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 17 19:52:33.898049 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Mar 17 19:52:33.898185 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 19:52:33.898359 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 17 19:52:33.898505 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 17 19:52:33.898602 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Mar 17 19:52:33.901911 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Mar 17 19:52:33.902010 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Mar 17 19:52:33.902095 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Mar 17 19:52:33.902179 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Mar 17 19:52:33.902261 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Mar 17 19:52:33.902354 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 17 19:52:33.902437 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Mar 17 19:52:33.902517 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Mar 17 19:52:33.902611 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Mar 17 19:52:33.902723 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Mar 17 19:52:33.902813 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Mar 17 19:52:33.902897 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Mar 17 19:52:33.902982 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Mar 17 19:52:33.903062 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 19:52:33.903150 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 17 19:52:33.903233 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Mar 17 19:52:33.903314 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Mar 17 19:52:33.903394 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Mar 17 19:52:33.903475 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Mar 17 19:52:33.903567 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 17 19:52:33.903649 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 17 19:52:33.903774 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Mar 17 19:52:33.903857 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Mar 17 19:52:33.903944 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Mar 17 19:52:33.904025 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Mar 17 19:52:33.904104 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Mar 17 19:52:33.904200 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 19:52:33.904282 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Mar 17 19:52:33.904362 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Mar 17 19:52:33.904442 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Mar 17 19:52:33.904454 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 19:52:33.904463 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 19:52:33.904471 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 19:52:33.904482 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 19:52:33.904490 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 17 19:52:33.904498 kernel: iommu: Default domain type: Translated
Mar 17 19:52:33.904506 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 19:52:33.904586 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Mar 17 19:52:33.904683 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 19:52:33.904768 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Mar 17 19:52:33.904780 kernel: vgaarb: loaded
Mar 17 19:52:33.904788 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 19:52:33.904799 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 19:52:33.904808 kernel: PTP clock support registered
Mar 17 19:52:33.904816 kernel: PCI: Using ACPI for IRQ routing
Mar 17 19:52:33.904824 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 19:52:33.904832 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 19:52:33.904840 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Mar 17 19:52:33.904848 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 19:52:33.904856 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 19:52:33.904864 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 19:52:33.904873 kernel: pnp: PnP ACPI init
Mar 17 19:52:33.904955 kernel: pnp 00:03: [dma 2]
Mar 17 19:52:33.904968 kernel: pnp: PnP ACPI: found 5 devices
Mar 17 19:52:33.904976 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 19:52:33.904985 kernel: NET: Registered PF_INET protocol family
Mar 17 19:52:33.904993 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 19:52:33.905001 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 19:52:33.905010 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 19:52:33.905020 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 19:52:33.905028 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Mar 17 19:52:33.905036 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 19:52:33.905044 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 19:52:33.905053 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 19:52:33.905061 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 19:52:33.905069 kernel: NET: Registered PF_XDP protocol family
Mar 17 19:52:33.905141 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 19:52:33.905213 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 19:52:33.905288 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 19:52:33.905358 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Mar 17 19:52:33.905430 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Mar 17 19:52:33.905511 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Mar 17 19:52:33.905593 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 17 19:52:33.905689 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Mar 17 19:52:33.905702 kernel: PCI: CLS 0 bytes, default 64
Mar 17 19:52:33.905711 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 17 19:52:33.905722 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Mar 17 19:52:33.905730 kernel: Initialise system trusted keyrings
Mar 17 19:52:33.905738 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 19:52:33.905747 kernel: Key type asymmetric registered
Mar 17 19:52:33.905755 kernel: Asymmetric key parser 'x509' registered
Mar 17 19:52:33.905763 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 19:52:33.905771 kernel: io scheduler mq-deadline registered
Mar 17 19:52:33.905779 kernel: io scheduler kyber registered
Mar 17 19:52:33.905797 kernel: io scheduler bfq registered
Mar 17 19:52:33.905807 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 19:52:33.905816 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Mar 17 19:52:33.905824 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 17 19:52:33.905832 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 17 19:52:33.905841 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 17 19:52:33.905849 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 19:52:33.905857 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 19:52:33.905865 kernel: random: crng init done
Mar 17 19:52:33.905873 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 19:52:33.905883 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 19:52:33.905891 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 19:52:33.905899 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 19:52:33.905982 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 17 19:52:33.906058 kernel: rtc_cmos 00:04: registered as rtc0
Mar 17 19:52:33.906131 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T19:52:33 UTC (1742241153)
Mar 17 19:52:33.906203 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 17 19:52:33.906214 kernel: NET: Registered PF_INET6 protocol family
Mar 17 19:52:33.906225 kernel: Segment Routing with IPv6
Mar 17 19:52:33.906233 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 19:52:33.906241 kernel: NET: Registered PF_PACKET protocol family
Mar 17 19:52:33.906249 kernel: Key type dns_resolver registered
Mar 17 19:52:33.906257 kernel: IPI shorthand broadcast: enabled
Mar 17 19:52:33.906265 kernel: sched_clock: Marking stable (848216420, 168955934)->(1070294307, -53121953)
Mar 17 19:52:33.906273 kernel: registered taskstats version 1
Mar 17 19:52:33.906282 kernel: Loading compiled-in X.509 certificates
Mar 17 19:52:33.906290 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220'
Mar 17 19:52:33.906299 kernel: Key type .fscrypt registered
Mar 17 19:52:33.906307 kernel: Key type fscrypt-provisioning registered
Mar 17 19:52:33.906316 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 19:52:33.906324 kernel: ima: Allocated hash algorithm: sha1
Mar 17 19:52:33.906332 kernel: ima: No architecture policies found
Mar 17 19:52:33.906340 kernel: clk: Disabling unused clocks
Mar 17 19:52:33.906348 kernel: Freeing unused kernel image (initmem) memory: 47472K
Mar 17 19:52:33.906356 kernel: Write protecting the kernel read-only data: 28672k
Mar 17 19:52:33.906364 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Mar 17 19:52:33.906373 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
Mar 17 19:52:33.906381 kernel: Run /init as init process
Mar 17 19:52:33.906389 kernel: with arguments:
Mar 17 19:52:33.906397 kernel: /init
Mar 17 19:52:33.906405 kernel: with environment:
Mar 17 19:52:33.906413 kernel: HOME=/
Mar 17 19:52:33.906421 kernel: TERM=linux
Mar 17 19:52:33.906428 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 19:52:33.906439 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 19:52:33.906451 systemd[1]: Detected virtualization kvm.
Mar 17 19:52:33.906461 systemd[1]: Detected architecture x86-64.
Mar 17 19:52:33.906469 systemd[1]: Running in initrd.
Mar 17 19:52:33.906478 systemd[1]: No hostname configured, using default hostname.
Mar 17 19:52:33.906487 systemd[1]: Hostname set to .
Mar 17 19:52:33.906496 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 19:52:33.906505 systemd[1]: Queued start job for default target initrd.target.
Mar 17 19:52:33.906514 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 19:52:33.906523 systemd[1]: Reached target cryptsetup.target.
Mar 17 19:52:33.906531 systemd[1]: Reached target paths.target.
Mar 17 19:52:33.906540 systemd[1]: Reached target slices.target.
Mar 17 19:52:33.906548 systemd[1]: Reached target swap.target.
Mar 17 19:52:33.906558 systemd[1]: Reached target timers.target.
Mar 17 19:52:33.906568 systemd[1]: Listening on iscsid.socket.
Mar 17 19:52:33.906579 systemd[1]: Listening on iscsiuio.socket.
Mar 17 19:52:33.906596 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 19:52:33.906607 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 19:52:33.906616 systemd[1]: Listening on systemd-journald.socket.
Mar 17 19:52:33.906626 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 19:52:33.906636 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 19:52:33.906647 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 19:52:33.906657 systemd[1]: Reached target sockets.target.
Mar 17 19:52:33.906681 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 19:52:33.906691 systemd[1]: Finished network-cleanup.service.
Mar 17 19:52:33.906700 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 19:52:33.906709 systemd[1]: Starting systemd-journald.service...
Mar 17 19:52:33.906719 systemd[1]: Starting systemd-modules-load.service...
Mar 17 19:52:33.906728 systemd[1]: Starting systemd-resolved.service...
Mar 17 19:52:33.906738 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 19:52:33.906749 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 19:52:33.906759 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 19:52:33.906772 systemd-journald[186]: Journal started
Mar 17 19:52:33.906819 systemd-journald[186]: Runtime Journal (/run/log/journal/dfe56ab5de144b0ebb7d0ab00e338426) is 8.0M, max 78.4M, 70.4M free.
Mar 17 19:52:33.899034 systemd-modules-load[187]: Inserted module 'overlay'
Mar 17 19:52:33.901352 systemd-resolved[188]: Positive Trust Anchors:
Mar 17 19:52:33.928500 systemd[1]: Started systemd-journald.service.
Mar 17 19:52:33.928525 kernel: audit: type=1130 audit(1742241153.921:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:33.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:33.901364 systemd-resolved[188]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 19:52:33.938976 kernel: audit: type=1130 audit(1742241153.928:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:33.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:33.901401 systemd-resolved[188]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 19:52:33.945977 kernel: audit: type=1130 audit(1742241153.934:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:33.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:33.911152 systemd-resolved[188]: Defaulting to hostname 'linux'.
Mar 17 19:52:33.929059 systemd[1]: Started systemd-resolved.service.
Mar 17 19:52:33.935589 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 19:52:33.936132 systemd[1]: Reached target nss-lookup.target.
Mar 17 19:52:33.937238 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 19:52:33.938228 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 19:52:33.956536 kernel: audit: type=1130 audit(1742241153.935:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:33.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:33.951401 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 19:52:33.965924 kernel: audit: type=1130 audit(1742241153.955:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:33.965955 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 19:52:33.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:33.966208 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 19:52:33.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:33.971717 kernel: audit: type=1130 audit(1742241153.966:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:33.967536 systemd[1]: Starting dracut-cmdline.service...
Mar 17 19:52:33.978700 kernel: Bridge firewalling registered
Mar 17 19:52:33.978025 systemd-modules-load[187]: Inserted module 'br_netfilter'
Mar 17 19:52:33.981661 dracut-cmdline[203]: dracut-dracut-053
Mar 17 19:52:33.984785 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 19:52:34.005692 kernel: SCSI subsystem initialized
Mar 17 19:52:34.024639 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 19:52:34.024686 kernel: device-mapper: uevent: version 1.0.3
Mar 17 19:52:34.027698 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Mar 17 19:52:34.031634 systemd-modules-load[187]: Inserted module 'dm_multipath'
Mar 17 19:52:34.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:34.033066 systemd[1]: Finished systemd-modules-load.service.
Mar 17 19:52:34.040258 kernel: audit: type=1130 audit(1742241154.032:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:34.034341 systemd[1]: Starting systemd-sysctl.service...
Mar 17 19:52:34.046829 systemd[1]: Finished systemd-sysctl.service.
Mar 17 19:52:34.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:34.052702 kernel: audit: type=1130 audit(1742241154.046:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:34.069695 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 19:52:34.089692 kernel: iscsi: registered transport (tcp)
Mar 17 19:52:34.115826 kernel: iscsi: registered transport (qla4xxx)
Mar 17 19:52:34.115873 kernel: QLogic iSCSI HBA Driver
Mar 17 19:52:34.173260 systemd[1]: Finished dracut-cmdline.service.
Mar 17 19:52:34.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:34.176491 systemd[1]: Starting dracut-pre-udev.service...
Mar 17 19:52:34.180790 kernel: audit: type=1130 audit(1742241154.173:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:34.241810 kernel: raid6: sse2x4 gen() 12723 MB/s
Mar 17 19:52:34.259774 kernel: raid6: sse2x4 xor() 7059 MB/s
Mar 17 19:52:34.277765 kernel: raid6: sse2x2 gen() 14720 MB/s
Mar 17 19:52:34.295770 kernel: raid6: sse2x2 xor() 8835 MB/s
Mar 17 19:52:34.313768 kernel: raid6: sse2x1 gen() 11444 MB/s
Mar 17 19:52:34.335696 kernel: raid6: sse2x1 xor() 7019 MB/s
Mar 17 19:52:34.335756 kernel: raid6: using algorithm sse2x2 gen() 14720 MB/s
Mar 17 19:52:34.335783 kernel: raid6: .... xor() 8835 MB/s, rmw enabled
Mar 17 19:52:34.336837 kernel: raid6: using ssse3x2 recovery algorithm
Mar 17 19:52:34.352569 kernel: xor: measuring software checksum speed
Mar 17 19:52:34.352639 kernel: prefetch64-sse : 18364 MB/sec
Mar 17 19:52:34.352699 kernel: generic_sse : 16066 MB/sec
Mar 17 19:52:34.355169 kernel: xor: using function: prefetch64-sse (18364 MB/sec)
Mar 17 19:52:34.469734 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Mar 17 19:52:34.485579 systemd[1]: Finished dracut-pre-udev.service.
Mar 17 19:52:34.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:34.485000 audit: BPF prog-id=7 op=LOAD
Mar 17 19:52:34.485000 audit: BPF prog-id=8 op=LOAD
Mar 17 19:52:34.487083 systemd[1]: Starting systemd-udevd.service...
Mar 17 19:52:34.499881 systemd-udevd[385]: Using default interface naming scheme 'v252'.
Mar 17 19:52:34.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:34.504422 systemd[1]: Started systemd-udevd.service.
Mar 17 19:52:34.509632 systemd[1]: Starting dracut-pre-trigger.service...
Mar 17 19:52:34.537171 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Mar 17 19:52:34.581080 systemd[1]: Finished dracut-pre-trigger.service.
Mar 17 19:52:34.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:34.584550 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 19:52:34.625949 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 19:52:34.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:34.708696 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Mar 17 19:52:34.734105 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 19:52:34.734123 kernel: GPT:17805311 != 20971519
Mar 17 19:52:34.734134 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 19:52:34.734145 kernel: GPT:17805311 != 20971519
Mar 17 19:52:34.734161 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 19:52:34.734171 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 19:52:34.734183 kernel: libata version 3.00 loaded.
Mar 17 19:52:34.734193 kernel: ata_piix 0000:00:01.1: version 2.13
Mar 17 19:52:34.737535 kernel: scsi host0: ata_piix
Mar 17 19:52:34.737654 kernel: scsi host1: ata_piix
Mar 17 19:52:34.737809 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Mar 17 19:52:34.737823 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Mar 17 19:52:34.758693 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (436)
Mar 17 19:52:34.765038 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Mar 17 19:52:34.805160 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Mar 17 19:52:34.809144 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Mar 17 19:52:34.809712 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Mar 17 19:52:34.814571 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 19:52:34.815914 systemd[1]: Starting disk-uuid.service...
Mar 17 19:52:34.829822 disk-uuid[470]: Primary Header is updated.
Mar 17 19:52:34.829822 disk-uuid[470]: Secondary Entries is updated.
Mar 17 19:52:34.829822 disk-uuid[470]: Secondary Header is updated.
Mar 17 19:52:34.836696 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 19:52:34.842779 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 19:52:35.856720 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 19:52:35.858064 disk-uuid[471]: The operation has completed successfully.
Mar 17 19:52:35.909568 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 19:52:35.910540 systemd[1]: Finished disk-uuid.service.
Mar 17 19:52:35.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:35.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:35.916035 systemd[1]: Starting verity-setup.service...
Mar 17 19:52:35.955747 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Mar 17 19:52:36.061931 systemd[1]: Found device dev-mapper-usr.device.
Mar 17 19:52:36.066271 systemd[1]: Mounting sysusr-usr.mount...
Mar 17 19:52:36.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:36.068902 systemd[1]: Finished verity-setup.service.
Mar 17 19:52:36.213695 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Mar 17 19:52:36.214349 systemd[1]: Mounted sysusr-usr.mount.
Mar 17 19:52:36.215484 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Mar 17 19:52:36.217101 systemd[1]: Starting ignition-setup.service...
Mar 17 19:52:36.218887 systemd[1]: Starting parse-ip-for-networkd.service...
Mar 17 19:52:36.234049 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 19:52:36.234111 kernel: BTRFS info (device vda6): using free space tree
Mar 17 19:52:36.234131 kernel: BTRFS info (device vda6): has skinny extents
Mar 17 19:52:36.251189 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 19:52:36.264160 systemd[1]: Finished ignition-setup.service.
Mar 17 19:52:36.265437 systemd[1]: Starting ignition-fetch-offline.service...
Mar 17 19:52:36.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:36.314329 systemd[1]: Finished parse-ip-for-networkd.service.
Mar 17 19:52:36.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:36.316000 audit: BPF prog-id=9 op=LOAD
Mar 17 19:52:36.318656 systemd[1]: Starting systemd-networkd.service...
Mar 17 19:52:36.347832 systemd-networkd[641]: lo: Link UP
Mar 17 19:52:36.347844 systemd-networkd[641]: lo: Gained carrier
Mar 17 19:52:36.348298 systemd-networkd[641]: Enumeration completed
Mar 17 19:52:36.348925 systemd-networkd[641]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 19:52:36.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:36.351172 systemd-networkd[641]: eth0: Link UP
Mar 17 19:52:36.351176 systemd-networkd[641]: eth0: Gained carrier
Mar 17 19:52:36.351807 systemd[1]: Started systemd-networkd.service.
Mar 17 19:52:36.356226 systemd[1]: Reached target network.target.
Mar 17 19:52:36.358501 systemd[1]: Starting iscsiuio.service...
Mar 17 19:52:36.370808 systemd[1]: Started iscsiuio.service.
Mar 17 19:52:36.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:36.372749 systemd-networkd[641]: eth0: DHCPv4 address 172.24.4.126/24, gateway 172.24.4.1 acquired from 172.24.4.1
Mar 17 19:52:36.373400 systemd[1]: Starting iscsid.service...
Mar 17 19:52:36.382201 iscsid[648]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 19:52:36.382201 iscsid[648]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Mar 17 19:52:36.382201 iscsid[648]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Mar 17 19:52:36.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:36.386741 iscsid[648]: If using hardware iscsi like qla4xxx this message can be ignored.
Mar 17 19:52:36.386741 iscsid[648]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 19:52:36.386741 iscsid[648]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Mar 17 19:52:36.385110 systemd[1]: Started iscsid.service.
Mar 17 19:52:36.386493 systemd[1]: Starting dracut-initqueue.service...
Mar 17 19:52:36.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:36.398214 systemd[1]: Finished dracut-initqueue.service.
Mar 17 19:52:36.398843 systemd[1]: Reached target remote-fs-pre.target.
Mar 17 19:52:36.399286 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 19:52:36.399760 systemd[1]: Reached target remote-fs.target.
Mar 17 19:52:36.401039 systemd[1]: Starting dracut-pre-mount.service...
Mar 17 19:52:36.410398 systemd[1]: Finished dracut-pre-mount.service.
Mar 17 19:52:36.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:36.556227 ignition[589]: Ignition 2.14.0
Mar 17 19:52:36.556254 ignition[589]: Stage: fetch-offline
Mar 17 19:52:36.556373 ignition[589]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 19:52:36.556422 ignition[589]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Mar 17 19:52:36.558837 ignition[589]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 17 19:52:36.559057 ignition[589]: parsed url from cmdline: ""
Mar 17 19:52:36.562179 systemd[1]: Finished ignition-fetch-offline.service.
Mar 17 19:52:36.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:36.559070 ignition[589]: no config URL provided
Mar 17 19:52:36.559084 ignition[589]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 19:52:36.566887 systemd[1]: Starting ignition-fetch.service...
Mar 17 19:52:36.559103 ignition[589]: no config at "/usr/lib/ignition/user.ign"
Mar 17 19:52:36.559115 ignition[589]: failed to fetch config: resource requires networking
Mar 17 19:52:36.559602 ignition[589]: Ignition finished successfully
Mar 17 19:52:36.587789 ignition[664]: Ignition 2.14.0
Mar 17 19:52:36.587809 ignition[664]: Stage: fetch
Mar 17 19:52:36.588053 ignition[664]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 19:52:36.588095 ignition[664]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Mar 17 19:52:36.590169 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 17 19:52:36.590386 ignition[664]: parsed url from cmdline: ""
Mar 17 19:52:36.590426 ignition[664]: no config URL provided
Mar 17 19:52:36.590441 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 19:52:36.590462 ignition[664]: no config at "/usr/lib/ignition/user.ign"
Mar 17 19:52:36.595846 ignition[664]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Mar 17 19:52:36.595904 ignition[664]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Mar 17 19:52:36.597660 ignition[664]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Mar 17 19:52:36.871479 ignition[664]: GET result: OK
Mar 17 19:52:36.871591 ignition[664]: parsing config with SHA512: aee3b9ec53a49df9c617acc93264daa9dc229692de1413db95a13ed7428e5552f1929ab7a0d516a8a0ff45177733d968f6b42fb4a1aa4577417cced7fe1a5b3a
Mar 17 19:52:36.889628 unknown[664]: fetched base config from "system"
Mar 17 19:52:36.889663 unknown[664]: fetched base config from "system"
Mar 17 19:52:36.890486 ignition[664]: fetch: fetch complete
Mar 17 19:52:36.889741 unknown[664]: fetched user config from "openstack"
Mar 17 19:52:36.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:36.890499 ignition[664]: fetch: fetch passed
Mar 17 19:52:36.894134 systemd[1]: Finished ignition-fetch.service.
Mar 17 19:52:36.890597 ignition[664]: Ignition finished successfully
Mar 17 19:52:36.896386 systemd[1]: Starting ignition-kargs.service...
Mar 17 19:52:36.926060 ignition[670]: Ignition 2.14.0
Mar 17 19:52:36.926085 ignition[670]: Stage: kargs
Mar 17 19:52:36.926355 ignition[670]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 19:52:36.926400 ignition[670]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Mar 17 19:52:36.928775 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 17 19:52:36.930874 ignition[670]: kargs: kargs passed
Mar 17 19:52:36.930973 ignition[670]: Ignition finished successfully
Mar 17 19:52:36.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:36.933652 systemd[1]: Finished ignition-kargs.service.
Mar 17 19:52:36.937813 systemd[1]: Starting ignition-disks.service...
Mar 17 19:52:36.952485 ignition[675]: Ignition 2.14.0
Mar 17 19:52:36.952499 ignition[675]: Stage: disks
Mar 17 19:52:36.952638 ignition[675]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 19:52:36.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:36.957489 systemd[1]: Finished ignition-disks.service.
Mar 17 19:52:36.952661 ignition[675]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Mar 17 19:52:36.959716 systemd[1]: Reached target initrd-root-device.target.
Mar 17 19:52:36.953821 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 17 19:52:36.961924 systemd[1]: Reached target local-fs-pre.target.
Mar 17 19:52:36.955220 ignition[675]: disks: disks passed
Mar 17 19:52:36.964148 systemd[1]: Reached target local-fs.target.
Mar 17 19:52:36.955284 ignition[675]: Ignition finished successfully
Mar 17 19:52:36.966263 systemd[1]: Reached target sysinit.target.
Mar 17 19:52:36.968583 systemd[1]: Reached target basic.target.
Mar 17 19:52:36.972569 systemd[1]: Starting systemd-fsck-root.service...
Mar 17 19:52:37.003370 systemd-fsck[682]: ROOT: clean, 623/1628000 files, 124059/1617920 blocks
Mar 17 19:52:37.013187 systemd[1]: Finished systemd-fsck-root.service.
Mar 17 19:52:37.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:37.014781 systemd[1]: Mounting sysroot.mount...
Mar 17 19:52:37.035769 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Mar 17 19:52:37.036148 systemd[1]: Mounted sysroot.mount.
Mar 17 19:52:37.036725 systemd[1]: Reached target initrd-root-fs.target.
Mar 17 19:52:37.039917 systemd[1]: Mounting sysroot-usr.mount...
Mar 17 19:52:37.040749 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Mar 17 19:52:37.041429 systemd[1]: Starting flatcar-openstack-hostname.service...
Mar 17 19:52:37.044944 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 19:52:37.044972 systemd[1]: Reached target ignition-diskful.target.
Mar 17 19:52:37.048459 systemd[1]: Mounted sysroot-usr.mount.
Mar 17 19:52:37.051719 systemd[1]: Starting initrd-setup-root.service...
Mar 17 19:52:37.061109 initrd-setup-root[693]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 19:52:37.084634 initrd-setup-root[701]: cut: /sysroot/etc/group: No such file or directory
Mar 17 19:52:37.100336 initrd-setup-root[709]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 19:52:37.104759 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Mar 17 19:52:37.125689 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (712)
Mar 17 19:52:37.126393 initrd-setup-root[718]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 19:52:37.132749 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 19:52:37.132852 kernel: BTRFS info (device vda6): using free space tree
Mar 17 19:52:37.132881 kernel: BTRFS info (device vda6): has skinny extents
Mar 17 19:52:37.154813 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Mar 17 19:52:37.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 19:52:37.210395 systemd[1]: Finished initrd-setup-root.service.
Mar 17 19:52:37.211796 systemd[1]: Starting ignition-mount.service...
Mar 17 19:52:37.213520 systemd[1]: Starting sysroot-boot.service...
Mar 17 19:52:37.229811 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Mar 17 19:52:37.229935 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Mar 17 19:52:37.251922 ignition[757]: INFO : Ignition 2.14.0 Mar 17 19:52:37.252734 ignition[757]: INFO : Stage: mount Mar 17 19:52:37.253380 ignition[757]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 19:52:37.254193 ignition[757]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 19:52:37.256273 ignition[757]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 19:52:37.259907 ignition[757]: INFO : mount: mount passed Mar 17 19:52:37.260460 ignition[757]: INFO : Ignition finished successfully Mar 17 19:52:37.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:37.261820 systemd[1]: Finished ignition-mount.service. Mar 17 19:52:37.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:37.268642 systemd[1]: Finished sysroot-boot.service. Mar 17 19:52:37.272061 coreos-metadata[688]: Mar 17 19:52:37.272 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 17 19:52:37.289182 coreos-metadata[688]: Mar 17 19:52:37.289 INFO Fetch successful Mar 17 19:52:37.289787 coreos-metadata[688]: Mar 17 19:52:37.289 INFO wrote hostname ci-3510-3-7-f-d5e02b2809.novalocal to /sysroot/etc/hostname Mar 17 19:52:37.293964 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Mar 17 19:52:37.294062 systemd[1]: Finished flatcar-openstack-hostname.service. Mar 17 19:52:37.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:37.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:37.296272 systemd[1]: Starting ignition-files.service... Mar 17 19:52:37.303493 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 19:52:37.313706 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (767) Mar 17 19:52:37.317695 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 19:52:37.317728 kernel: BTRFS info (device vda6): using free space tree Mar 17 19:52:37.317739 kernel: BTRFS info (device vda6): has skinny extents Mar 17 19:52:37.329226 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
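The flatcar-openstack-hostname step above fetches the instance hostname from the EC2-compatible metadata path and writes it to /sysroot/etc/hostname. A hedged Python sketch of that fetch-and-write; the URL and destination are taken from the log, while the encoding and newline handling are assumptions.

```python
import urllib.request

HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"
DEST = "/sysroot/etc/hostname"  # destination recorded in the log

def fetch_hostname(url: str = HOSTNAME_URL, timeout: float = 10.0) -> str:
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode("utf-8").strip()

def write_hostname(hostname: str, dest: str = DEST) -> None:
    # coreos-metadata logs "wrote hostname ... to /sysroot/etc/hostname";
    # this sketch just writes the single-line file that step produces.
    with open(dest, "w") as f:
        f.write(hostname + "\n")

if __name__ == "__main__":
    name = fetch_hostname()
    print("Fetch successful:", name)
    write_hostname(name)
```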
Mar 17 19:52:37.350547 ignition[786]: INFO : Ignition 2.14.0 Mar 17 19:52:37.350547 ignition[786]: INFO : Stage: files Mar 17 19:52:37.351709 ignition[786]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 19:52:37.351709 ignition[786]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 19:52:37.353444 ignition[786]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 19:52:37.356530 ignition[786]: DEBUG : files: compiled without relabeling support, skipping Mar 17 19:52:37.357516 ignition[786]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 19:52:37.357516 ignition[786]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 19:52:37.365188 ignition[786]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 19:52:37.366170 ignition[786]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 19:52:37.367902 unknown[786]: wrote ssh authorized keys file for user: core Mar 17 19:52:37.369946 ignition[786]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 19:52:37.369946 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Mar 17 19:52:37.369946 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 19:52:37.369946 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 19:52:37.369946 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 19:52:37.369946 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 19:52:37.369946 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 19:52:37.369946 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 19:52:37.369946 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Mar 17 19:52:37.957604 systemd-networkd[641]: eth0: Gained IPv6LL Mar 17 19:52:37.962444 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Mar 17 19:52:39.605140 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 19:52:39.605140 ignition[786]: INFO : files: op(7): [started] processing unit "coreos-metadata-sshkeys@.service" Mar 17 19:52:39.605140 ignition[786]: INFO : files: op(7): [finished] processing unit "coreos-metadata-sshkeys@.service" Mar 17 19:52:39.605140 ignition[786]: INFO : files: op(8): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 17 19:52:39.637820 kernel: kauditd_printk_skb: 27 
callbacks suppressed Mar 17 19:52:39.637871 kernel: audit: type=1130 audit(1742241159.621:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.612889 systemd[1]: Finished ignition-files.service. Mar 17 19:52:39.640824 ignition[786]: INFO : files: op(8): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 17 19:52:39.640824 ignition[786]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 19:52:39.640824 ignition[786]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 19:52:39.640824 ignition[786]: INFO : files: files passed Mar 17 19:52:39.640824 ignition[786]: INFO : Ignition finished successfully Mar 17 19:52:39.662089 kernel: audit: type=1130 audit(1742241159.645:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.662113 kernel: audit: type=1130 audit(1742241159.651:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.662126 kernel: audit: type=1131 audit(1742241159.651:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.622627 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 19:52:39.638228 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 17 19:52:39.664687 initrd-setup-root-after-ignition[809]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 19:52:39.639058 systemd[1]: Starting ignition-quench.service... Mar 17 19:52:39.644188 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 19:52:39.646156 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 19:52:39.646330 systemd[1]: Finished ignition-quench.service. Mar 17 19:52:39.652530 systemd[1]: Reached target ignition-complete.target. Mar 17 19:52:39.664452 systemd[1]: Starting initrd-parse-etc.service... Mar 17 19:52:39.687787 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 19:52:39.689493 systemd[1]: Finished initrd-parse-etc.service. 
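In the files stage above, Ignition downloads the kubernetes sysext image into /sysroot/opt/extensions/ and writes a link at /sysroot/etc/extensions/kubernetes.raw pointing to /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw. A minimal sketch of those two operations; the URL and paths are from the log, while streaming to disk and unconditionally replacing an existing link are assumptions of the sketch, not Ignition behavior.

```python
import os
import shutil
import urllib.request

# URL and destination paths as recorded in the ignition "files" stage above.
SYSEXT_URL = ("https://github.com/flatcar/sysext-bakery/releases/download/"
              "latest/kubernetes-v1.30.1-x86-64.raw")
IMAGE_PATH = "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
LINK_PATH = "/sysroot/etc/extensions/kubernetes.raw"
LINK_TARGET = "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"

def download(url: str, dest: str) -> None:
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        shutil.copyfileobj(resp, out)   # stream to disk instead of buffering in memory

def write_link(path: str, target: str) -> None:
    os.makedirs(os.path.dirname(path), exist_ok=True)
    if os.path.lexists(path):
        os.remove(path)                 # replace any previous link
    os.symlink(target, path)

if __name__ == "__main__":
    download(SYSEXT_URL, IMAGE_PATH)
    write_link(LINK_PATH, LINK_TARGET)
```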
Mar 17 19:52:39.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.695785 systemd[1]: Reached target initrd-fs.target. Mar 17 19:52:39.715973 kernel: audit: type=1130 audit(1742241159.691:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.716017 kernel: audit: type=1131 audit(1742241159.695:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.713936 systemd[1]: Reached target initrd.target. Mar 17 19:52:39.714434 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 19:52:39.715166 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 19:52:39.730071 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 19:52:39.735980 kernel: audit: type=1130 audit(1742241159.729:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.731267 systemd[1]: Starting initrd-cleanup.service... Mar 17 19:52:39.744384 systemd[1]: Stopped target nss-lookup.target. Mar 17 19:52:39.745474 systemd[1]: Stopped target remote-cryptsetup.target. Mar 17 19:52:39.746576 systemd[1]: Stopped target timers.target. Mar 17 19:52:39.747588 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 19:52:39.748305 systemd[1]: Stopped dracut-pre-pivot.service. Mar 17 19:52:39.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.749680 systemd[1]: Stopped target initrd.target. Mar 17 19:52:39.755125 kernel: audit: type=1131 audit(1742241159.748:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.755691 systemd[1]: Stopped target basic.target. Mar 17 19:52:39.756348 systemd[1]: Stopped target ignition-complete.target. Mar 17 19:52:39.757433 systemd[1]: Stopped target ignition-diskful.target. Mar 17 19:52:39.758611 systemd[1]: Stopped target initrd-root-device.target. Mar 17 19:52:39.759815 systemd[1]: Stopped target remote-fs.target. Mar 17 19:52:39.760902 systemd[1]: Stopped target remote-fs-pre.target. Mar 17 19:52:39.762023 systemd[1]: Stopped target sysinit.target. Mar 17 19:52:39.763169 systemd[1]: Stopped target local-fs.target. Mar 17 19:52:39.764289 systemd[1]: Stopped target local-fs-pre.target. Mar 17 19:52:39.765463 systemd[1]: Stopped target swap.target. Mar 17 19:52:39.766605 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
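The kernel audit records interleaved above follow a fixed shape: type=1130 marks SERVICE_START, type=1131 marks SERVICE_STOP, and the msg field carries the unit name and result. A small regex sketch for pulling those fields out of such lines; the field set extracted here is limited to what is visible in this log.

```python
import re

# Shape of the kernel audit lines in this log, e.g.
#   audit: type=1130 audit(1742241159.729:44): ... msg='unit=dracut-pre-pivot ... res=success'
AUDIT_RE = re.compile(
    r"type=(?P<type>\d+) audit\((?P<ts>[\d.]+):(?P<serial>\d+)\):.*"
    r"unit=(?P<unit>[\w@./\\-]+).*res=(?P<res>\w+)"
)

def parse_audit(line: str) -> dict | None:
    m = AUDIT_RE.search(line)
    return m.groupdict() if m else None

if __name__ == "__main__":
    sample = ("type=1130 audit(1742241159.729:44): pid=1 uid=0 "
              "msg='unit=dracut-pre-pivot comm=\"systemd\" res=success'")
    print(parse_audit(sample))
    # -> {'type': '1130', 'ts': '1742241159.729', 'serial': '44',
    #     'unit': 'dracut-pre-pivot', 'res': 'success'}
```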
Mar 17 19:52:39.773313 kernel: audit: type=1131 audit(1742241159.767:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.766791 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 19:52:39.767914 systemd[1]: Stopped target cryptsetup.target. Mar 17 19:52:39.780819 kernel: audit: type=1131 audit(1742241159.774:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.773947 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 19:52:39.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.774109 systemd[1]: Stopped dracut-initqueue.service. Mar 17 19:52:39.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.775321 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 19:52:39.775491 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Mar 17 19:52:39.781495 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 19:52:39.781640 systemd[1]: Stopped ignition-files.service. Mar 17 19:52:39.783778 systemd[1]: Stopping ignition-mount.service... Mar 17 19:52:39.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.790701 systemd[1]: Stopping iscsiuio.service... Mar 17 19:52:39.791331 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 19:52:39.791512 systemd[1]: Stopped kmod-static-nodes.service. Mar 17 19:52:39.794409 systemd[1]: Stopping sysroot-boot.service... Mar 17 19:52:39.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.795069 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 19:52:39.795282 systemd[1]: Stopped systemd-udev-trigger.service. Mar 17 19:52:39.796172 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 19:52:39.796825 systemd[1]: Stopped dracut-pre-trigger.service. Mar 17 19:52:39.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 19:52:39.804663 systemd[1]: iscsiuio.service: Deactivated successfully. Mar 17 19:52:39.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.804805 systemd[1]: Stopped iscsiuio.service. Mar 17 19:52:39.807224 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 19:52:39.807329 systemd[1]: Finished initrd-cleanup.service. Mar 17 19:52:39.812769 ignition[824]: INFO : Ignition 2.14.0 Mar 17 19:52:39.812769 ignition[824]: INFO : Stage: umount Mar 17 19:52:39.812769 ignition[824]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 19:52:39.812769 ignition[824]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 19:52:39.818516 ignition[824]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 19:52:39.819573 ignition[824]: INFO : umount: umount passed Mar 17 19:52:39.819573 ignition[824]: INFO : Ignition finished successfully Mar 17 19:52:39.821055 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 19:52:39.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.821164 systemd[1]: Stopped ignition-mount.service. Mar 17 19:52:39.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.822111 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 19:52:39.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.822170 systemd[1]: Stopped ignition-disks.service. Mar 17 19:52:39.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.823282 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 19:52:39.823338 systemd[1]: Stopped ignition-kargs.service. Mar 17 19:52:39.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.824510 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 19:52:39.824563 systemd[1]: Stopped ignition-fetch.service. Mar 17 19:52:39.825630 systemd[1]: Stopped target network.target. Mar 17 19:52:39.826799 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 19:52:39.826860 systemd[1]: Stopped ignition-fetch-offline.service. Mar 17 19:52:39.828035 systemd[1]: Stopped target paths.target. Mar 17 19:52:39.829085 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Mar 17 19:52:39.832730 systemd[1]: Stopped systemd-ask-password-console.path. Mar 17 19:52:39.838276 systemd[1]: Stopped target slices.target. Mar 17 19:52:39.839565 systemd[1]: Stopped target sockets.target. Mar 17 19:52:39.841058 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 19:52:39.841093 systemd[1]: Closed iscsid.socket. Mar 17 19:52:39.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.842059 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 19:52:39.842106 systemd[1]: Closed iscsiuio.socket. Mar 17 19:52:39.843100 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 19:52:39.843157 systemd[1]: Stopped ignition-setup.service. Mar 17 19:52:39.844515 systemd[1]: Stopping systemd-networkd.service... Mar 17 19:52:39.845449 systemd[1]: Stopping systemd-resolved.service... Mar 17 19:52:39.847785 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 19:52:39.848730 systemd-networkd[641]: eth0: DHCPv6 lease lost Mar 17 19:52:39.854081 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 19:52:39.854241 systemd[1]: Stopped systemd-networkd.service. Mar 17 19:52:39.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.856349 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 19:52:39.856476 systemd[1]: Stopped systemd-resolved.service. Mar 17 19:52:39.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.858843 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 19:52:39.858956 systemd[1]: Stopped sysroot-boot.service. Mar 17 19:52:39.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.859000 audit: BPF prog-id=9 op=UNLOAD Mar 17 19:52:39.859000 audit: BPF prog-id=6 op=UNLOAD Mar 17 19:52:39.860235 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 19:52:39.860295 systemd[1]: Closed systemd-networkd.socket. Mar 17 19:52:39.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.861149 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 19:52:39.861202 systemd[1]: Stopped initrd-setup-root.service. Mar 17 19:52:39.863058 systemd[1]: Stopping network-cleanup.service... Mar 17 19:52:39.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.865259 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 19:52:39.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.865333 systemd[1]: Stopped parse-ip-for-networkd.service. 
Mar 17 19:52:39.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.866469 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 19:52:39.866541 systemd[1]: Stopped systemd-sysctl.service. Mar 17 19:52:39.868116 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 19:52:39.868166 systemd[1]: Stopped systemd-modules-load.service. Mar 17 19:52:39.869124 systemd[1]: Stopping systemd-udevd.service... Mar 17 19:52:39.876022 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 19:52:39.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.879244 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 19:52:39.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.879440 systemd[1]: Stopped systemd-udevd.service. Mar 17 19:52:39.881129 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 19:52:39.881253 systemd[1]: Stopped network-cleanup.service. Mar 17 19:52:39.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.882574 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 19:52:39.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.882623 systemd[1]: Closed systemd-udevd-control.socket. Mar 17 19:52:39.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.883654 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 19:52:39.883725 systemd[1]: Closed systemd-udevd-kernel.socket. Mar 17 19:52:39.884642 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 19:52:39.884726 systemd[1]: Stopped dracut-pre-udev.service. Mar 17 19:52:39.885775 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 19:52:39.885830 systemd[1]: Stopped dracut-cmdline.service. Mar 17 19:52:39.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.886821 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 19:52:39.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:39.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 19:52:39.886873 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 17 19:52:39.888855 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 17 19:52:39.895724 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 19:52:39.895799 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 17 19:52:39.897228 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 19:52:39.897342 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Mar 17 19:52:39.898181 systemd[1]: Reached target initrd-switch-root.target. Mar 17 19:52:39.900006 systemd[1]: Starting initrd-switch-root.service... Mar 17 19:52:39.915366 systemd[1]: Switching root. Mar 17 19:52:39.935449 iscsid[648]: iscsid shutting down. Mar 17 19:52:39.936314 systemd-journald[186]: Received SIGTERM from PID 1 (n/a). Mar 17 19:52:39.936389 systemd-journald[186]: Journal stopped Mar 17 19:52:44.194279 kernel: SELinux: Class mctp_socket not defined in policy. Mar 17 19:52:44.194325 kernel: SELinux: Class anon_inode not defined in policy. Mar 17 19:52:44.194345 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 17 19:52:44.194357 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 19:52:44.194368 kernel: SELinux: policy capability open_perms=1 Mar 17 19:52:44.194382 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 19:52:44.194395 kernel: SELinux: policy capability always_check_network=0 Mar 17 19:52:44.194406 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 19:52:44.194420 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 19:52:44.194431 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 19:52:44.194530 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 19:52:44.194548 systemd[1]: Successfully loaded SELinux policy in 94.098ms. Mar 17 19:52:44.194565 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.979ms. Mar 17 19:52:44.194579 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 19:52:44.194591 systemd[1]: Detected virtualization kvm. Mar 17 19:52:44.194604 systemd[1]: Detected architecture x86-64. Mar 17 19:52:44.194616 systemd[1]: Detected first boot. Mar 17 19:52:44.194627 systemd[1]: Hostname set to . Mar 17 19:52:44.194639 systemd[1]: Initializing machine ID from VM UUID. Mar 17 19:52:44.194731 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Mar 17 19:52:44.194744 systemd[1]: Populated /etc with preset unit settings. Mar 17 19:52:44.194756 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 19:52:44.194769 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 19:52:44.194781 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 19:52:44.194793 systemd[1]: iscsid.service: Deactivated successfully. 
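"Detected first boot" and "Initializing machine ID from VM UUID" above refer to systemd seeding /etc/machine-id on a fresh VM. The sketch below shows one way such an ID could be derived from the DMI product UUID; the source path and the normalization are assumptions of this illustration, since the log only states that the machine ID comes from the VM UUID.

```python
# Illustrative only: derive a 32-hex-digit, machine-id-style string from the
# VM's DMI product UUID. Reading this path typically requires root.
DMI_UUID_PATH = "/sys/class/dmi/id/product_uuid"

def machine_id_from_vm_uuid(path: str = DMI_UUID_PATH) -> str:
    with open(path) as f:
        uuid = f.read().strip()
    # machine-id files hold 32 lowercase hex characters with no dashes.
    return uuid.replace("-", "").lower()

if __name__ == "__main__":
    print(machine_id_from_vm_uuid())
```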
Mar 17 19:52:44.194808 systemd[1]: Stopped iscsid.service. Mar 17 19:52:44.194819 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 19:52:44.194830 systemd[1]: Stopped initrd-switch-root.service. Mar 17 19:52:44.194842 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 19:52:44.194853 systemd[1]: Created slice system-addon\x2dconfig.slice. Mar 17 19:52:44.194864 systemd[1]: Created slice system-addon\x2drun.slice. Mar 17 19:52:44.194876 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Mar 17 19:52:44.194887 systemd[1]: Created slice system-getty.slice. Mar 17 19:52:44.194900 systemd[1]: Created slice system-modprobe.slice. Mar 17 19:52:44.194912 systemd[1]: Created slice system-serial\x2dgetty.slice. Mar 17 19:52:44.194923 systemd[1]: Created slice system-system\x2dcloudinit.slice. Mar 17 19:52:44.194935 systemd[1]: Created slice system-systemd\x2dfsck.slice. Mar 17 19:52:44.194946 systemd[1]: Created slice user.slice. Mar 17 19:52:44.194958 systemd[1]: Started systemd-ask-password-console.path. Mar 17 19:52:44.194971 systemd[1]: Started systemd-ask-password-wall.path. Mar 17 19:52:44.194983 systemd[1]: Set up automount boot.automount. Mar 17 19:52:44.194994 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Mar 17 19:52:44.195005 systemd[1]: Stopped target initrd-switch-root.target. Mar 17 19:52:44.195016 systemd[1]: Stopped target initrd-fs.target. Mar 17 19:52:44.195027 systemd[1]: Stopped target initrd-root-fs.target. Mar 17 19:52:44.195040 systemd[1]: Reached target integritysetup.target. Mar 17 19:52:44.195052 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 19:52:44.195063 systemd[1]: Reached target remote-fs.target. Mar 17 19:52:44.195081 systemd[1]: Reached target slices.target. Mar 17 19:52:44.195093 systemd[1]: Reached target swap.target. Mar 17 19:52:44.195104 systemd[1]: Reached target torcx.target. Mar 17 19:52:44.195116 systemd[1]: Reached target veritysetup.target. Mar 17 19:52:44.195127 systemd[1]: Listening on systemd-coredump.socket. Mar 17 19:52:44.195138 systemd[1]: Listening on systemd-initctl.socket. Mar 17 19:52:44.195282 systemd[1]: Listening on systemd-networkd.socket. Mar 17 19:52:44.195299 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 19:52:44.195311 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 19:52:44.195322 systemd[1]: Listening on systemd-userdbd.socket. Mar 17 19:52:44.195333 systemd[1]: Mounting dev-hugepages.mount... Mar 17 19:52:44.195345 systemd[1]: Mounting dev-mqueue.mount... Mar 17 19:52:44.195356 systemd[1]: Mounting media.mount... Mar 17 19:52:44.195368 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 19:52:44.195379 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 19:52:44.195394 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 19:52:44.195407 systemd[1]: Mounting tmp.mount... Mar 17 19:52:44.195418 systemd[1]: Starting flatcar-tmpfiles.service... Mar 17 19:52:44.195501 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 19:52:44.195517 systemd[1]: Starting kmod-static-nodes.service... Mar 17 19:52:44.195528 systemd[1]: Starting modprobe@configfs.service... Mar 17 19:52:44.195540 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 19:52:44.195551 systemd[1]: Starting modprobe@drm.service... Mar 17 19:52:44.195562 systemd[1]: Starting modprobe@efi_pstore.service... 
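Slice names such as system-coreos\x2dmetadata\x2dsshkeys.slice above use systemd's name escaping, where a literal "-" inside a component is written as "\x2d". A small sketch that decodes such names back to readable form; it only handles the \xNN escapes visible in this log, not the full systemd escaping rules.

```python
import re

# Decode the "\x2d"-style escapes that appear in unit names in the log, e.g.
# "system-coreos\x2dmetadata\x2dsshkeys.slice" -> "system-coreos-metadata-sshkeys.slice".
ESCAPE_RE = re.compile(r"\\x([0-9a-fA-F]{2})")

def unescape_unit_name(name: str) -> str:
    return ESCAPE_RE.sub(lambda m: chr(int(m.group(1), 16)), name)

if __name__ == "__main__":
    print(unescape_unit_name(r"system-coreos\x2dmetadata\x2dsshkeys.slice"))
    print(unescape_unit_name(r"system-addon\x2dconfig.slice"))
```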
Mar 17 19:52:44.195574 systemd[1]: Starting modprobe@fuse.service... Mar 17 19:52:44.195587 systemd[1]: Starting modprobe@loop.service... Mar 17 19:52:44.195599 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 19:52:44.195611 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 19:52:44.195622 systemd[1]: Stopped systemd-fsck-root.service. Mar 17 19:52:44.195633 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 19:52:44.195644 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 19:52:44.195656 systemd[1]: Stopped systemd-journald.service. Mar 17 19:52:44.195680 kernel: fuse: init (API version 7.34) Mar 17 19:52:44.195695 kernel: loop: module loaded Mar 17 19:52:44.195708 systemd[1]: Starting systemd-journald.service... Mar 17 19:52:44.195719 systemd[1]: Starting systemd-modules-load.service... Mar 17 19:52:44.195731 systemd[1]: Starting systemd-network-generator.service... Mar 17 19:52:44.195745 systemd[1]: Starting systemd-remount-fs.service... Mar 17 19:52:44.195756 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 19:52:44.195768 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 19:52:44.195782 systemd-journald[951]: Journal started Mar 17 19:52:44.195824 systemd-journald[951]: Runtime Journal (/run/log/journal/dfe56ab5de144b0ebb7d0ab00e338426) is 8.0M, max 78.4M, 70.4M free. Mar 17 19:52:40.232000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 19:52:40.349000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 19:52:40.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 19:52:40.349000 audit: BPF prog-id=10 op=LOAD Mar 17 19:52:40.349000 audit: BPF prog-id=10 op=UNLOAD Mar 17 19:52:40.349000 audit: BPF prog-id=11 op=LOAD Mar 17 19:52:40.349000 audit: BPF prog-id=11 op=UNLOAD Mar 17 19:52:40.519000 audit[857]: AVC avc: denied { associate } for pid=857 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 19:52:40.519000 audit[857]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178cc a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=840 pid=857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 19:52:40.519000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 19:52:40.523000 audit[857]: AVC avc: denied { associate } for pid=857 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 17 19:52:40.523000 audit[857]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a5 a2=1ed a3=0 items=2 ppid=840 pid=857 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 19:52:40.523000 audit: CWD cwd="/" Mar 17 19:52:40.523000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:40.523000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:40.523000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 19:52:43.974000 audit: BPF prog-id=12 op=LOAD Mar 17 19:52:43.974000 audit: BPF prog-id=3 op=UNLOAD Mar 17 19:52:43.974000 audit: BPF prog-id=13 op=LOAD Mar 17 19:52:43.974000 audit: BPF prog-id=14 op=LOAD Mar 17 19:52:43.974000 audit: BPF prog-id=4 op=UNLOAD Mar 17 19:52:43.974000 audit: BPF prog-id=5 op=UNLOAD Mar 17 19:52:43.975000 audit: BPF prog-id=15 op=LOAD Mar 17 19:52:43.975000 audit: BPF prog-id=12 op=UNLOAD Mar 17 19:52:43.975000 audit: BPF prog-id=16 op=LOAD Mar 17 19:52:43.975000 audit: BPF prog-id=17 op=LOAD Mar 17 19:52:43.975000 audit: BPF prog-id=13 op=UNLOAD Mar 17 19:52:43.975000 audit: BPF prog-id=14 op=UNLOAD Mar 17 19:52:43.976000 audit: BPF prog-id=18 op=LOAD Mar 17 19:52:43.976000 audit: BPF prog-id=15 op=UNLOAD Mar 17 19:52:43.976000 audit: BPF prog-id=19 op=LOAD Mar 17 19:52:43.976000 audit: BPF prog-id=20 op=LOAD Mar 17 19:52:43.976000 audit: BPF prog-id=16 op=UNLOAD Mar 17 19:52:43.976000 audit: BPF prog-id=17 op=UNLOAD Mar 17 19:52:43.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:43.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:43.985000 audit: BPF prog-id=18 op=UNLOAD Mar 17 19:52:43.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:43.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 19:52:44.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.169000 audit: BPF prog-id=21 op=LOAD Mar 17 19:52:44.169000 audit: BPF prog-id=22 op=LOAD Mar 17 19:52:44.169000 audit: BPF prog-id=23 op=LOAD Mar 17 19:52:44.169000 audit: BPF prog-id=19 op=UNLOAD Mar 17 19:52:44.169000 audit: BPF prog-id=20 op=UNLOAD Mar 17 19:52:44.191000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 19:52:44.191000 audit[951]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff249ab4e0 a2=4000 a3=7fff249ab57c items=0 ppid=1 pid=951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 19:52:44.191000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 19:52:43.972405 systemd[1]: Queued start job for default target multi-user.target. Mar 17 19:52:44.200157 systemd[1]: Stopped verity-setup.service. Mar 17 19:52:44.200180 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 19:52:44.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:40.514573 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 19:52:43.972425 systemd[1]: Unnecessary job was removed for dev-vda6.device. Mar 17 19:52:40.517483 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:40Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 19:52:43.978130 systemd[1]: systemd-journald.service: Deactivated successfully. 
Mar 17 19:52:40.517506 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:40Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 19:52:40.517539 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:40Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Mar 17 19:52:40.517551 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:40Z" level=debug msg="skipped missing lower profile" missing profile=oem Mar 17 19:52:40.517585 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:40Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Mar 17 19:52:40.517601 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:40Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Mar 17 19:52:40.517864 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:40Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Mar 17 19:52:40.517906 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:40Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 19:52:40.517921 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:40Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 19:52:40.519628 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:40Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Mar 17 19:52:40.519683 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:40Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Mar 17 19:52:40.519707 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:40Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Mar 17 19:52:40.519729 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:40Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Mar 17 19:52:40.519748 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:40Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Mar 17 19:52:40.519764 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:40Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Mar 17 19:52:43.443516 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:43Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 19:52:43.443809 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:43Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy 
/bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 19:52:43.443927 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:43Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 19:52:43.444119 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:43Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 19:52:43.444179 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:43Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Mar 17 19:52:43.444253 /usr/lib/systemd/system-generators/torcx-generator[857]: time="2025-03-17T19:52:43Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Mar 17 19:52:44.209716 systemd[1]: Started systemd-journald.service. Mar 17 19:52:44.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.207395 systemd[1]: Mounted dev-hugepages.mount. Mar 17 19:52:44.207902 systemd[1]: Mounted dev-mqueue.mount. Mar 17 19:52:44.208380 systemd[1]: Mounted media.mount. Mar 17 19:52:44.208870 systemd[1]: Mounted sys-kernel-debug.mount. Mar 17 19:52:44.209362 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 19:52:44.209920 systemd[1]: Mounted tmp.mount. Mar 17 19:52:44.210698 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 19:52:44.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.211616 systemd[1]: Finished kmod-static-nodes.service. Mar 17 19:52:44.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.212336 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 19:52:44.212487 systemd[1]: Finished modprobe@configfs.service. Mar 17 19:52:44.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.213200 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 19:52:44.213337 systemd[1]: Finished modprobe@dm_mod.service. 
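The torcx-generator messages above show it walking a fixed list of store directories, skipping the ones that do not exist, and caching the archives it finds (docker:20.10.torcx.tgz, docker:com.coreos.cl.torcx.tgz). A sketch of that discovery step under the assumption that a store simply holds *.torcx.tgz files; the search-path list is taken from the log, and the rest of torcx's behavior (unpacking, profile sealing) is not reproduced here.

```python
from pathlib import Path

# Store search order as printed by torcx-generator in the log above.
STORE_PATHS = [
    "/usr/share/torcx/store",
    "/usr/share/oem/torcx/store/3510.3.7",
    "/usr/share/oem/torcx/store",
    "/var/lib/torcx/store/3510.3.7",
    "/var/lib/torcx/store",
]

def discover_archives(stores=STORE_PATHS) -> dict[str, Path]:
    """Collect name:reference -> path for *.torcx.tgz files, skipping missing stores."""
    cache: dict[str, Path] = {}
    for store in map(Path, stores):
        if not store.is_dir():
            # Mirrors the "store skipped ... no such file or directory" log lines.
            continue
        for archive in sorted(store.glob("*.torcx.tgz")):
            # e.g. "docker:20.10.torcx.tgz" -> key "docker:20.10"
            cache.setdefault(archive.name[: -len(".torcx.tgz")], archive)
    return cache

if __name__ == "__main__":
    for key, path in discover_archives().items():
        print(f"{key} -> {path}")
```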
Mar 17 19:52:44.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.214143 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 19:52:44.214314 systemd[1]: Finished modprobe@drm.service. Mar 17 19:52:44.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.214964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 19:52:44.215095 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 19:52:44.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.215826 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 19:52:44.215934 systemd[1]: Finished modprobe@fuse.service. Mar 17 19:52:44.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.216568 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 19:52:44.216700 systemd[1]: Finished modprobe@loop.service. Mar 17 19:52:44.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.217418 systemd[1]: Finished systemd-modules-load.service. Mar 17 19:52:44.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.218267 systemd[1]: Finished systemd-network-generator.service. 
Mar 17 19:52:44.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.219130 systemd[1]: Finished systemd-remount-fs.service. Mar 17 19:52:44.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.219989 systemd[1]: Reached target network-pre.target. Mar 17 19:52:44.221495 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 19:52:44.223360 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 19:52:44.226081 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 19:52:44.228934 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 19:52:44.234319 systemd[1]: Starting systemd-journal-flush.service... Mar 17 19:52:44.234902 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 19:52:44.235941 systemd[1]: Starting systemd-random-seed.service... Mar 17 19:52:44.236493 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 19:52:44.238282 systemd[1]: Starting systemd-sysctl.service... Mar 17 19:52:44.240352 systemd[1]: Starting systemd-sysusers.service... Mar 17 19:52:44.242289 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 19:52:44.244867 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 19:52:44.245901 systemd-journald[951]: Time spent on flushing to /var/log/journal/dfe56ab5de144b0ebb7d0ab00e338426 is 41.725ms for 1091 entries. Mar 17 19:52:44.245901 systemd-journald[951]: System Journal (/var/log/journal/dfe56ab5de144b0ebb7d0ab00e338426) is 8.0M, max 584.8M, 576.8M free. Mar 17 19:52:44.305454 systemd-journald[951]: Received client request to flush runtime journal. Mar 17 19:52:44.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.260118 systemd[1]: Finished systemd-random-seed.service. Mar 17 19:52:44.306311 udevadm[966]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 19:52:44.260790 systemd[1]: Reached target first-boot-complete.target. Mar 17 19:52:44.274444 systemd[1]: Finished systemd-sysctl.service. Mar 17 19:52:44.289045 systemd[1]: Finished systemd-sysusers.service. 
Mar 17 19:52:44.294562 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 19:52:44.296158 systemd[1]: Starting systemd-udev-settle.service... Mar 17 19:52:44.306217 systemd[1]: Finished systemd-journal-flush.service. Mar 17 19:52:44.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.828634 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 19:52:44.847429 kernel: kauditd_printk_skb: 106 callbacks suppressed Mar 17 19:52:44.847590 kernel: audit: type=1130 audit(1742241164.829:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.833000 audit: BPF prog-id=24 op=LOAD Mar 17 19:52:44.851310 systemd[1]: Starting systemd-udevd.service... Mar 17 19:52:44.846000 audit: BPF prog-id=25 op=LOAD Mar 17 19:52:44.846000 audit: BPF prog-id=7 op=UNLOAD Mar 17 19:52:44.846000 audit: BPF prog-id=8 op=UNLOAD Mar 17 19:52:44.860281 kernel: audit: type=1334 audit(1742241164.833:146): prog-id=24 op=LOAD Mar 17 19:52:44.860368 kernel: audit: type=1334 audit(1742241164.846:147): prog-id=25 op=LOAD Mar 17 19:52:44.860409 kernel: audit: type=1334 audit(1742241164.846:148): prog-id=7 op=UNLOAD Mar 17 19:52:44.860448 kernel: audit: type=1334 audit(1742241164.846:149): prog-id=8 op=UNLOAD Mar 17 19:52:44.890594 systemd-udevd[968]: Using default interface naming scheme 'v252'. Mar 17 19:52:44.931879 systemd[1]: Started systemd-udevd.service. Mar 17 19:52:44.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.949702 kernel: audit: type=1130 audit(1742241164.936:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:44.952000 audit: BPF prog-id=26 op=LOAD Mar 17 19:52:44.960011 kernel: audit: type=1334 audit(1742241164.952:151): prog-id=26 op=LOAD Mar 17 19:52:44.960189 systemd[1]: Starting systemd-networkd.service... Mar 17 19:52:44.968000 audit: BPF prog-id=27 op=LOAD Mar 17 19:52:44.974709 kernel: audit: type=1334 audit(1742241164.968:152): prog-id=27 op=LOAD Mar 17 19:52:44.974909 systemd[1]: Starting systemd-userdbd.service... Mar 17 19:52:44.969000 audit: BPF prog-id=28 op=LOAD Mar 17 19:52:44.980693 kernel: audit: type=1334 audit(1742241164.969:153): prog-id=28 op=LOAD Mar 17 19:52:44.969000 audit: BPF prog-id=29 op=LOAD Mar 17 19:52:44.985716 kernel: audit: type=1334 audit(1742241164.969:154): prog-id=29 op=LOAD Mar 17 19:52:44.991801 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Mar 17 19:52:45.050367 systemd[1]: Started systemd-userdbd.service. Mar 17 19:52:45.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Mar 17 19:52:45.060711 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 17 19:52:45.067692 kernel: ACPI: button: Power Button [PWRF] Mar 17 19:52:45.110000 audit[970]: AVC avc: denied { confidentiality } for pid=970 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Mar 17 19:52:45.110000 audit[970]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e03fd5e600 a1=338ac a2=7f4ec71debc5 a3=5 items=110 ppid=968 pid=970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 19:52:45.110000 audit: CWD cwd="/" Mar 17 19:52:45.110000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=1 name=(null) inode=14377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=2 name=(null) inode=14377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=3 name=(null) inode=14378 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=4 name=(null) inode=14377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=5 name=(null) inode=14379 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=6 name=(null) inode=14377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=7 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=8 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=9 name=(null) inode=14381 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=10 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=11 name=(null) inode=14382 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=12 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=13 name=(null) inode=14383 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=14 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=15 name=(null) inode=14384 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=16 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=17 name=(null) inode=14385 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=18 name=(null) inode=14377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=19 name=(null) inode=14386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=20 name=(null) inode=14386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=21 name=(null) inode=14387 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=22 name=(null) inode=14386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=23 name=(null) inode=14388 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=24 name=(null) inode=14386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=25 name=(null) inode=14389 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=26 name=(null) inode=14386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=27 name=(null) inode=14390 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=28 name=(null) inode=14386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=29 name=(null) inode=14391 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=30 name=(null) inode=14377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=31 name=(null) inode=14392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=32 name=(null) inode=14392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=33 name=(null) inode=14393 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=34 name=(null) inode=14392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=35 name=(null) inode=14394 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=36 name=(null) inode=14392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=37 name=(null) inode=14395 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=38 name=(null) inode=14392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=39 name=(null) inode=14396 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=40 name=(null) inode=14392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=41 name=(null) inode=14397 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=42 name=(null) inode=14377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=43 name=(null) inode=14398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=44 name=(null) inode=14398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=45 
name=(null) inode=14399 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=46 name=(null) inode=14398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=47 name=(null) inode=14400 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=48 name=(null) inode=14398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=49 name=(null) inode=14401 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=50 name=(null) inode=14398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=51 name=(null) inode=14402 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=52 name=(null) inode=14398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=53 name=(null) inode=14403 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=55 name=(null) inode=14404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=56 name=(null) inode=14404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=57 name=(null) inode=14405 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=58 name=(null) inode=14404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=59 name=(null) inode=14406 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=60 name=(null) inode=14404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=61 name=(null) inode=14407 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=62 name=(null) inode=14407 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=63 name=(null) inode=14408 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=64 name=(null) inode=14407 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=65 name=(null) inode=14409 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=66 name=(null) inode=14407 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=67 name=(null) inode=14410 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=68 name=(null) inode=14407 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=69 name=(null) inode=14411 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=70 name=(null) inode=14407 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=71 name=(null) inode=14412 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=72 name=(null) inode=14404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=73 name=(null) inode=14413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=74 name=(null) inode=14413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=75 name=(null) inode=14414 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=76 name=(null) inode=14413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=77 name=(null) inode=14415 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=78 name=(null) inode=14413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=79 name=(null) inode=14416 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=80 name=(null) inode=14413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=81 name=(null) inode=14417 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=82 name=(null) inode=14413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=83 name=(null) inode=14418 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=84 name=(null) inode=14404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=85 name=(null) inode=14419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=86 name=(null) inode=14419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=87 name=(null) inode=14420 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=88 name=(null) inode=14419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=89 name=(null) inode=14421 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=90 name=(null) inode=14419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=91 name=(null) inode=14422 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=92 name=(null) inode=14419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=93 name=(null) inode=14423 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH 
item=94 name=(null) inode=14419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=95 name=(null) inode=14424 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=96 name=(null) inode=14404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=97 name=(null) inode=14425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=98 name=(null) inode=14425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=99 name=(null) inode=14426 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=100 name=(null) inode=14425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=101 name=(null) inode=14427 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=102 name=(null) inode=14425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=103 name=(null) inode=14428 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=104 name=(null) inode=14425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=105 name=(null) inode=14429 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=106 name=(null) inode=14425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=107 name=(null) inode=14430 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PATH item=109 name=(null) inode=14431 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 19:52:45.110000 audit: PROCTITLE proctitle="(udev-worker)" Mar 17 19:52:45.160990 kernel: 
piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Mar 17 19:52:45.167721 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 17 19:52:45.178245 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 19:52:45.179707 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 19:52:45.439403 systemd[1]: Finished systemd-udev-settle.service. Mar 17 19:52:45.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:45.443257 systemd[1]: Starting lvm2-activation-early.service... Mar 17 19:52:45.462644 systemd-networkd[989]: lo: Link UP Mar 17 19:52:45.463245 systemd-networkd[989]: lo: Gained carrier Mar 17 19:52:45.464640 systemd-networkd[989]: Enumeration completed Mar 17 19:52:45.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:45.465003 systemd[1]: Started systemd-networkd.service. Mar 17 19:52:45.467015 systemd-networkd[989]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 19:52:45.471239 systemd-networkd[989]: eth0: Link UP Mar 17 19:52:45.471463 systemd-networkd[989]: eth0: Gained carrier Mar 17 19:52:45.485903 systemd-networkd[989]: eth0: DHCPv4 address 172.24.4.126/24, gateway 172.24.4.1 acquired from 172.24.4.1 Mar 17 19:52:45.488119 lvm[1002]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 19:52:45.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:45.526499 systemd[1]: Finished lvm2-activation-early.service. Mar 17 19:52:45.527970 systemd[1]: Reached target cryptsetup.target. Mar 17 19:52:45.531300 systemd[1]: Starting lvm2-activation.service... Mar 17 19:52:45.539825 lvm[1003]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 19:52:45.577610 systemd[1]: Finished lvm2-activation.service. Mar 17 19:52:45.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:45.579081 systemd[1]: Reached target local-fs-pre.target. Mar 17 19:52:45.580270 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 19:52:45.580330 systemd[1]: Reached target local-fs.target. Mar 17 19:52:45.581494 systemd[1]: Reached target machines.target. Mar 17 19:52:45.585141 systemd[1]: Starting ldconfig.service... Mar 17 19:52:45.587441 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 19:52:45.587533 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 19:52:45.589596 systemd[1]: Starting systemd-boot-update.service... Mar 17 19:52:45.593485 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... 
Mar 17 19:52:45.596998 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 19:52:45.600974 systemd[1]: Starting systemd-sysext.service... Mar 17 19:52:45.619315 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1005 (bootctl) Mar 17 19:52:45.621907 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 19:52:45.654878 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 19:52:45.675960 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 19:52:45.676301 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 19:52:45.692497 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 19:52:45.693565 systemd[1]: Finished systemd-machine-id-commit.service. Mar 17 19:52:45.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:45.705748 kernel: loop0: detected capacity change from 0 to 210664 Mar 17 19:52:45.711662 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 19:52:45.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:45.772722 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 19:52:45.805748 kernel: loop1: detected capacity change from 0 to 210664 Mar 17 19:52:45.854349 (sd-sysext)[1019]: Using extensions 'kubernetes'. Mar 17 19:52:45.856810 (sd-sysext)[1019]: Merged extensions into '/usr'. Mar 17 19:52:45.878571 systemd-fsck[1016]: fsck.fat 4.2 (2021-01-31) Mar 17 19:52:45.878571 systemd-fsck[1016]: /dev/vda1: 789 files, 119299/258078 clusters Mar 17 19:52:45.904187 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Mar 17 19:52:45.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:45.913390 systemd[1]: Mounting boot.mount... Mar 17 19:52:45.916895 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 19:52:45.919808 systemd[1]: Mounting usr-share-oem.mount... Mar 17 19:52:45.921380 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 19:52:45.924994 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 19:52:45.929863 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 19:52:45.935374 systemd[1]: Starting modprobe@loop.service... Mar 17 19:52:45.941299 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 19:52:45.941455 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 19:52:45.941596 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 19:52:45.945205 systemd[1]: Mounted usr-share-oem.mount. Mar 17 19:52:45.946156 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 19:52:45.946291 systemd[1]: Finished modprobe@dm_mod.service. 
Mar 17 19:52:45.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:45.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:45.947239 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 19:52:45.947369 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 19:52:45.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:45.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:45.948296 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 19:52:45.948418 systemd[1]: Finished modprobe@loop.service. Mar 17 19:52:45.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:45.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:45.949437 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 19:52:45.949561 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 19:52:45.950868 systemd[1]: Finished systemd-sysext.service. Mar 17 19:52:45.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:45.952811 systemd[1]: Starting ensure-sysext.service... Mar 17 19:52:45.959157 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 19:52:45.963941 systemd[1]: Reloading. Mar 17 19:52:45.997807 systemd-tmpfiles[1027]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 19:52:46.014478 systemd-tmpfiles[1027]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 19:52:46.031756 systemd-tmpfiles[1027]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Mar 17 19:52:46.038045 /usr/lib/systemd/system-generators/torcx-generator[1046]: time="2025-03-17T19:52:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 19:52:46.038077 /usr/lib/systemd/system-generators/torcx-generator[1046]: time="2025-03-17T19:52:46Z" level=info msg="torcx already run" Mar 17 19:52:46.160722 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 19:52:46.160741 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 19:52:46.185093 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 19:52:46.247000 audit: BPF prog-id=30 op=LOAD Mar 17 19:52:46.247000 audit: BPF prog-id=31 op=LOAD Mar 17 19:52:46.247000 audit: BPF prog-id=24 op=UNLOAD Mar 17 19:52:46.247000 audit: BPF prog-id=25 op=UNLOAD Mar 17 19:52:46.249000 audit: BPF prog-id=32 op=LOAD Mar 17 19:52:46.249000 audit: BPF prog-id=26 op=UNLOAD Mar 17 19:52:46.249000 audit: BPF prog-id=33 op=LOAD Mar 17 19:52:46.249000 audit: BPF prog-id=27 op=UNLOAD Mar 17 19:52:46.249000 audit: BPF prog-id=34 op=LOAD Mar 17 19:52:46.249000 audit: BPF prog-id=35 op=LOAD Mar 17 19:52:46.249000 audit: BPF prog-id=28 op=UNLOAD Mar 17 19:52:46.249000 audit: BPF prog-id=29 op=UNLOAD Mar 17 19:52:46.251000 audit: BPF prog-id=36 op=LOAD Mar 17 19:52:46.251000 audit: BPF prog-id=21 op=UNLOAD Mar 17 19:52:46.251000 audit: BPF prog-id=37 op=LOAD Mar 17 19:52:46.251000 audit: BPF prog-id=38 op=LOAD Mar 17 19:52:46.251000 audit: BPF prog-id=22 op=UNLOAD Mar 17 19:52:46.251000 audit: BPF prog-id=23 op=UNLOAD Mar 17 19:52:46.256278 systemd[1]: Mounted boot.mount. Mar 17 19:52:46.274110 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 19:52:46.275718 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 19:52:46.278929 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 19:52:46.282745 systemd[1]: Starting modprobe@loop.service... Mar 17 19:52:46.283354 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 19:52:46.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.283472 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 19:52:46.284384 systemd[1]: Finished systemd-boot-update.service. Mar 17 19:52:46.285296 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 19:52:46.285416 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 19:52:46.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 19:52:46.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.286475 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 19:52:46.286590 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 19:52:46.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.287503 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 19:52:46.287612 systemd[1]: Finished modprobe@loop.service. Mar 17 19:52:46.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.288525 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 19:52:46.288633 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 19:52:46.290936 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 19:52:46.292200 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 19:52:46.295263 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 19:52:46.298260 systemd[1]: Starting modprobe@loop.service... Mar 17 19:52:46.300303 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 19:52:46.300447 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 19:52:46.301419 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 19:52:46.301549 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 19:52:46.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.302551 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 19:52:46.302729 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 19:52:46.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 19:52:46.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.305279 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 19:52:46.305406 systemd[1]: Finished modprobe@loop.service. Mar 17 19:52:46.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.311739 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 19:52:46.313000 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 19:52:46.315026 systemd[1]: Starting modprobe@drm.service... Mar 17 19:52:46.317981 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 19:52:46.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.319987 systemd[1]: Starting modprobe@loop.service... Mar 17 19:52:46.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.320555 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 19:52:46.320688 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 19:52:46.321934 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 19:52:46.323829 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 19:52:46.323970 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 19:52:46.325154 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 19:52:46.325718 systemd[1]: Finished modprobe@drm.service. Mar 17 19:52:46.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.327920 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Mar 17 19:52:46.328041 systemd[1]: Finished modprobe@loop.service. Mar 17 19:52:46.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.329153 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 19:52:46.331137 systemd[1]: Finished ensure-sysext.service. Mar 17 19:52:46.332299 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 19:52:46.333070 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 19:52:46.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.334310 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 19:52:46.380833 systemd[1]: Finished systemd-tmpfiles-setup.service. Mar 17 19:52:46.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.382588 systemd[1]: Starting audit-rules.service... Mar 17 19:52:46.384172 systemd[1]: Starting clean-ca-certificates.service... Mar 17 19:52:46.385763 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 19:52:46.386000 audit: BPF prog-id=39 op=LOAD Mar 17 19:52:46.388063 systemd[1]: Starting systemd-resolved.service... Mar 17 19:52:46.390000 audit: BPF prog-id=40 op=LOAD Mar 17 19:52:46.392286 systemd[1]: Starting systemd-timesyncd.service... Mar 17 19:52:46.395789 systemd[1]: Starting systemd-update-utmp.service... Mar 17 19:52:46.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.399649 systemd[1]: Finished clean-ca-certificates.service. Mar 17 19:52:46.400297 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 19:52:46.403000 audit[1111]: SYSTEM_BOOT pid=1111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.407816 systemd[1]: Finished systemd-update-utmp.service. Mar 17 19:52:46.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.425241 ldconfig[1004]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 19:52:46.438937 systemd[1]: Finished ldconfig.service. 
Mar 17 19:52:46.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.444430 systemd[1]: Finished systemd-journal-catalog-update.service. Mar 17 19:52:46.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.446255 systemd[1]: Starting systemd-update-done.service... Mar 17 19:52:46.452377 systemd[1]: Finished systemd-update-done.service. Mar 17 19:52:46.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.475441 systemd[1]: Started systemd-timesyncd.service. Mar 17 19:52:46.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 19:52:46.476864 systemd[1]: Reached target time-set.target. Mar 17 19:52:46.480000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 19:52:46.480000 audit[1126]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffff069f6a0 a2=420 a3=0 items=0 ppid=1105 pid=1126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 19:52:46.480000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 19:52:46.481353 augenrules[1126]: No rules Mar 17 19:52:46.481490 systemd[1]: Finished audit-rules.service. Mar 17 19:52:46.497067 systemd-resolved[1108]: Positive Trust Anchors: Mar 17 19:52:46.497335 systemd-resolved[1108]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 19:52:46.497424 systemd-resolved[1108]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 19:52:46.504431 systemd-resolved[1108]: Using system hostname 'ci-3510-3-7-f-d5e02b2809.novalocal'. Mar 17 19:52:46.505846 systemd[1]: Started systemd-resolved.service. Mar 17 19:52:46.506429 systemd[1]: Reached target network.target. Mar 17 19:52:46.506927 systemd[1]: Reached target nss-lookup.target. Mar 17 19:52:46.507428 systemd[1]: Reached target sysinit.target. Mar 17 19:52:46.507983 systemd[1]: Started motdgen.path. Mar 17 19:52:46.508446 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 19:52:46.509085 systemd[1]: Started logrotate.timer. Mar 17 19:52:46.509636 systemd[1]: Started mdadm.timer. Mar 17 19:52:46.510077 systemd[1]: Started systemd-tmpfiles-clean.timer. 
Mar 17 19:52:46.510523 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 19:52:46.510548 systemd[1]: Reached target paths.target. Mar 17 19:52:46.511010 systemd[1]: Reached target timers.target. Mar 17 19:52:46.511709 systemd[1]: Listening on dbus.socket. Mar 17 19:52:46.513093 systemd[1]: Starting docker.socket... Mar 17 19:52:46.516503 systemd[1]: Listening on sshd.socket. Mar 17 19:52:46.517154 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 19:52:46.517627 systemd[1]: Listening on docker.socket. Mar 17 19:52:46.518197 systemd[1]: Reached target sockets.target. Mar 17 19:52:46.518640 systemd[1]: Reached target basic.target. Mar 17 19:52:46.519127 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 19:52:46.519153 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 19:52:46.519962 systemd[1]: Starting containerd.service... Mar 17 19:52:46.521980 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Mar 17 19:52:46.523384 systemd[1]: Starting dbus.service... Mar 17 19:52:46.525775 systemd[1]: Starting enable-oem-cloudinit.service... Mar 17 19:52:46.528333 systemd[1]: Starting extend-filesystems.service... Mar 17 19:52:46.534625 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 17 19:52:46.535907 systemd[1]: Starting motdgen.service... Mar 17 19:52:46.537379 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 17 19:52:46.540358 systemd[1]: Starting sshd-keygen.service... Mar 17 19:52:46.546053 systemd[1]: Starting systemd-logind.service... Mar 17 19:52:46.546784 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 19:52:46.547217 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 19:52:46.547769 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 19:52:46.565417 jq[1139]: false Mar 17 19:52:46.549851 systemd[1]: Starting update-engine.service... Mar 17 19:52:46.565700 jq[1149]: true Mar 17 19:52:46.551608 systemd[1]: Starting update-ssh-keys-after-ignition.service... Mar 17 19:52:46.566173 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 19:52:46.566356 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Mar 17 19:52:46.572188 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 19:52:46.572380 systemd[1]: Finished ssh-key-proc-cmdline.service. Mar 17 19:52:46.593602 jq[1160]: true Mar 17 19:52:46.604357 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 19:52:46.604541 systemd[1]: Finished motdgen.service. Mar 17 19:52:47.247320 extend-filesystems[1140]: Found loop1 Mar 17 19:52:47.247320 extend-filesystems[1140]: Found vda Mar 17 19:52:47.247320 extend-filesystems[1140]: Found vda1 Mar 17 19:52:47.247137 systemd-resolved[1108]: Clock change detected. Flushing caches. 
Mar 17 19:52:47.249517 extend-filesystems[1140]: Found vda2 Mar 17 19:52:47.249517 extend-filesystems[1140]: Found vda3 Mar 17 19:52:47.249517 extend-filesystems[1140]: Found usr Mar 17 19:52:47.249517 extend-filesystems[1140]: Found vda4 Mar 17 19:52:47.249517 extend-filesystems[1140]: Found vda6 Mar 17 19:52:47.249517 extend-filesystems[1140]: Found vda7 Mar 17 19:52:47.249517 extend-filesystems[1140]: Found vda9 Mar 17 19:52:47.249517 extend-filesystems[1140]: Checking size of /dev/vda9 Mar 17 19:52:47.247247 systemd-timesyncd[1109]: Contacted time server 135.134.111.122:123 (0.flatcar.pool.ntp.org). Mar 17 19:52:47.247294 systemd-timesyncd[1109]: Initial clock synchronization to Mon 2025-03-17 19:52:47.247094 UTC. Mar 17 19:52:47.272309 extend-filesystems[1140]: Resized partition /dev/vda9 Mar 17 19:52:47.284618 extend-filesystems[1185]: resize2fs 1.46.5 (30-Dec-2021) Mar 17 19:52:47.285031 dbus-daemon[1136]: [system] SELinux support is enabled Mar 17 19:52:47.285180 systemd[1]: Started dbus.service. Mar 17 19:52:47.291214 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 19:52:47.291255 systemd[1]: Reached target system-config.target. Mar 17 19:52:47.291850 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 19:52:47.291878 systemd[1]: Reached target user-config.target. Mar 17 19:52:47.306649 env[1154]: time="2025-03-17T19:52:47.306297798Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 19:52:47.310999 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 19:52:47.311029 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 19:52:47.346402 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Mar 17 19:52:47.353080 update_engine[1148]: I0317 19:52:47.349246 1148 main.cc:92] Flatcar Update Engine starting Mar 17 19:52:47.355381 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Mar 17 19:52:47.417456 update_engine[1148]: I0317 19:52:47.361515 1148 update_check_scheduler.cc:74] Next update check in 7m1s Mar 17 19:52:47.417562 env[1154]: time="2025-03-17T19:52:47.365686341Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 19:52:47.361471 systemd[1]: Started update-engine.service. Mar 17 19:52:47.417738 env[1154]: time="2025-03-17T19:52:47.417561313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 19:52:47.363494 systemd[1]: Started locksmithd.service. Mar 17 19:52:47.418160 systemd-logind[1146]: Watching system buttons on /dev/input/event1 (Power Button) Mar 17 19:52:47.420490 extend-filesystems[1185]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 19:52:47.420490 extend-filesystems[1185]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 19:52:47.420490 extend-filesystems[1185]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. 
Mar 17 19:52:47.418182 systemd-logind[1146]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 19:52:47.429891 env[1154]: time="2025-03-17T19:52:47.421826973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 19:52:47.429891 env[1154]: time="2025-03-17T19:52:47.421890602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 19:52:47.429891 env[1154]: time="2025-03-17T19:52:47.422757568Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 19:52:47.429891 env[1154]: time="2025-03-17T19:52:47.422832358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 19:52:47.429891 env[1154]: time="2025-03-17T19:52:47.422865901Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 19:52:47.429891 env[1154]: time="2025-03-17T19:52:47.422894244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 19:52:47.429891 env[1154]: time="2025-03-17T19:52:47.423142751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 19:52:47.429891 env[1154]: time="2025-03-17T19:52:47.423960063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 19:52:47.429891 env[1154]: time="2025-03-17T19:52:47.424439823Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 19:52:47.429891 env[1154]: time="2025-03-17T19:52:47.424533158Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 19:52:47.430163 extend-filesystems[1140]: Resized filesystem in /dev/vda9 Mar 17 19:52:47.420183 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 19:52:47.420567 systemd[1]: Finished extend-filesystems.service. Mar 17 19:52:47.421461 systemd-logind[1146]: New seat seat0. Mar 17 19:52:47.431563 systemd[1]: Started systemd-logind.service. Mar 17 19:52:47.434195 env[1154]: time="2025-03-17T19:52:47.434062821Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 19:52:47.434295 env[1154]: time="2025-03-17T19:52:47.434206631Z" level=info msg="metadata content store policy set" policy=shared Mar 17 19:52:47.459776 bash[1186]: Updated "/home/core/.ssh/authorized_keys" Mar 17 19:52:47.460979 systemd[1]: Finished update-ssh-keys-after-ignition.service. Mar 17 19:52:47.477308 env[1154]: time="2025-03-17T19:52:47.477227917Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Mar 17 19:52:47.477383 env[1154]: time="2025-03-17T19:52:47.477329628Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 19:52:47.477491 env[1154]: time="2025-03-17T19:52:47.477449433Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 19:52:47.477713 env[1154]: time="2025-03-17T19:52:47.477574898Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 19:52:47.477779 env[1154]: time="2025-03-17T19:52:47.477746520Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 19:52:47.478496 env[1154]: time="2025-03-17T19:52:47.478461892Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 19:52:47.478574 env[1154]: time="2025-03-17T19:52:47.478510292Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 19:52:47.478613 env[1154]: time="2025-03-17T19:52:47.478587577Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 19:52:47.478682 env[1154]: time="2025-03-17T19:52:47.478652990Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 17 19:52:47.478753 env[1154]: time="2025-03-17T19:52:47.478694638Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 19:52:47.478787 env[1154]: time="2025-03-17T19:52:47.478765832Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 19:52:47.478859 env[1154]: time="2025-03-17T19:52:47.478797932Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 19:52:47.479193 env[1154]: time="2025-03-17T19:52:47.479156845Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 19:52:47.479558 env[1154]: time="2025-03-17T19:52:47.479520146Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 19:52:47.480506 env[1154]: time="2025-03-17T19:52:47.480468104Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 19:52:47.480599 env[1154]: time="2025-03-17T19:52:47.480568002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 19:52:47.480672 env[1154]: time="2025-03-17T19:52:47.480643313Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 19:52:47.480903 env[1154]: time="2025-03-17T19:52:47.480781452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 19:52:47.480980 env[1154]: time="2025-03-17T19:52:47.480949247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 19:52:47.481052 env[1154]: time="2025-03-17T19:52:47.480991606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 19:52:47.481087 env[1154]: time="2025-03-17T19:52:47.481061477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Mar 17 19:52:47.481152 env[1154]: time="2025-03-17T19:52:47.481123904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 19:52:47.481183 env[1154]: time="2025-03-17T19:52:47.481163468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 19:52:47.481253 env[1154]: time="2025-03-17T19:52:47.481225054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 19:52:47.481284 env[1154]: time="2025-03-17T19:52:47.481261763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 19:52:47.481365 env[1154]: time="2025-03-17T19:52:47.481322647Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 19:52:47.481849 env[1154]: time="2025-03-17T19:52:47.481811895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 19:52:47.481925 env[1154]: time="2025-03-17T19:52:47.481895922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 19:52:47.481997 env[1154]: time="2025-03-17T19:52:47.481937370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 19:52:47.482030 env[1154]: time="2025-03-17T19:52:47.482007421Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 19:52:47.482113 env[1154]: time="2025-03-17T19:52:47.482077753Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 19:52:47.482147 env[1154]: time="2025-03-17T19:52:47.482115815Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 19:52:47.482220 env[1154]: time="2025-03-17T19:52:47.482186788Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 19:52:47.482340 env[1154]: time="2025-03-17T19:52:47.482307885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 19:52:47.483213 env[1154]: time="2025-03-17T19:52:47.483049075Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 19:52:47.487120 env[1154]: time="2025-03-17T19:52:47.483240915Z" level=info msg="Connect containerd service" Mar 17 19:52:47.487120 env[1154]: time="2025-03-17T19:52:47.483334160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 19:52:47.487120 env[1154]: time="2025-03-17T19:52:47.485756463Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 19:52:47.487120 env[1154]: time="2025-03-17T19:52:47.486253575Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Mar 17 19:52:47.487835 env[1154]: time="2025-03-17T19:52:47.487748198Z" level=info msg="Start subscribing containerd event" Mar 17 19:52:47.487884 env[1154]: time="2025-03-17T19:52:47.487864626Z" level=info msg="Start recovering state" Mar 17 19:52:47.488012 env[1154]: time="2025-03-17T19:52:47.487980574Z" level=info msg="Start event monitor" Mar 17 19:52:47.488047 env[1154]: time="2025-03-17T19:52:47.488023464Z" level=info msg="Start snapshots syncer" Mar 17 19:52:47.488074 env[1154]: time="2025-03-17T19:52:47.488045456Z" level=info msg="Start cni network conf syncer for default" Mar 17 19:52:47.488074 env[1154]: time="2025-03-17T19:52:47.488063940Z" level=info msg="Start streaming server" Mar 17 19:52:47.488624 env[1154]: time="2025-03-17T19:52:47.488585659Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 19:52:47.490455 env[1154]: time="2025-03-17T19:52:47.489715698Z" level=info msg="containerd successfully booted in 0.186588s" Mar 17 19:52:47.489870 systemd[1]: Started containerd.service. Mar 17 19:52:47.590886 locksmithd[1191]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 19:52:47.802956 systemd[1]: Created slice system-sshd.slice. Mar 17 19:52:48.002702 systemd-networkd[989]: eth0: Gained IPv6LL Mar 17 19:52:48.005688 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 19:52:48.007582 systemd[1]: Reached target network-online.target. Mar 17 19:52:48.021403 systemd[1]: Starting kubelet.service... Mar 17 19:52:48.582864 sshd_keygen[1162]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 19:52:48.626701 systemd[1]: Finished sshd-keygen.service. Mar 17 19:52:48.631329 systemd[1]: Starting issuegen.service... Mar 17 19:52:48.634931 systemd[1]: Started sshd@0-172.24.4.126:22-172.24.4.1:57466.service. Mar 17 19:52:48.641829 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 19:52:48.642008 systemd[1]: Finished issuegen.service. Mar 17 19:52:48.644119 systemd[1]: Starting systemd-user-sessions.service... Mar 17 19:52:48.659564 systemd[1]: Finished systemd-user-sessions.service. Mar 17 19:52:48.661692 systemd[1]: Started getty@tty1.service. Mar 17 19:52:48.663452 systemd[1]: Started serial-getty@ttyS0.service. Mar 17 19:52:48.664188 systemd[1]: Reached target getty.target. Mar 17 19:52:49.976184 systemd[1]: Started kubelet.service. Mar 17 19:52:49.996225 sshd[1211]: Accepted publickey for core from 172.24.4.1 port 57466 ssh2: RSA SHA256:0qismvO9/NycYojDPV3BgQur5FYKlC/KcDYVOn7KNLI Mar 17 19:52:50.000493 sshd[1211]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 19:52:50.029008 systemd[1]: Created slice user-500.slice. Mar 17 19:52:50.033615 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 19:52:50.049525 systemd-logind[1146]: New session 1 of user core. Mar 17 19:52:50.062574 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 19:52:50.065454 systemd[1]: Starting user@500.service... Mar 17 19:52:50.070635 (systemd)[1222]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 19:52:50.168760 systemd[1222]: Queued start job for default target default.target. Mar 17 19:52:50.169588 systemd[1222]: Reached target paths.target. Mar 17 19:52:50.169716 systemd[1222]: Reached target sockets.target. Mar 17 19:52:50.169807 systemd[1222]: Reached target timers.target. Mar 17 19:52:50.169895 systemd[1222]: Reached target basic.target. 
Mar 17 19:52:50.170074 systemd[1]: Started user@500.service. Mar 17 19:52:50.171491 systemd[1]: Started session-1.scope. Mar 17 19:52:50.172773 systemd[1222]: Reached target default.target. Mar 17 19:52:50.173016 systemd[1222]: Startup finished in 92ms. Mar 17 19:52:50.758136 systemd[1]: Started sshd@1-172.24.4.126:22-172.24.4.1:57480.service. Mar 17 19:52:51.594054 kubelet[1220]: E0317 19:52:51.593965 1220 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 19:52:51.598638 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 19:52:51.598910 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 19:52:51.599486 systemd[1]: kubelet.service: Consumed 2.183s CPU time. Mar 17 19:52:52.895574 sshd[1237]: Accepted publickey for core from 172.24.4.1 port 57480 ssh2: RSA SHA256:0qismvO9/NycYojDPV3BgQur5FYKlC/KcDYVOn7KNLI Mar 17 19:52:52.898515 sshd[1237]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 19:52:52.909500 systemd-logind[1146]: New session 2 of user core. Mar 17 19:52:52.909679 systemd[1]: Started session-2.scope. Mar 17 19:52:53.379115 sshd[1237]: pam_unix(sshd:session): session closed for user core Mar 17 19:52:53.385330 systemd[1]: Started sshd@2-172.24.4.126:22-172.24.4.1:57494.service. Mar 17 19:52:53.390448 systemd[1]: sshd@1-172.24.4.126:22-172.24.4.1:57480.service: Deactivated successfully. Mar 17 19:52:53.391991 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 19:52:53.395858 systemd-logind[1146]: Session 2 logged out. Waiting for processes to exit. Mar 17 19:52:53.399567 systemd-logind[1146]: Removed session 2. Mar 17 19:52:54.260889 coreos-metadata[1135]: Mar 17 19:52:54.260 WARN failed to locate config-drive, using the metadata service API instead Mar 17 19:52:54.348656 coreos-metadata[1135]: Mar 17 19:52:54.348 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Mar 17 19:52:54.529271 coreos-metadata[1135]: Mar 17 19:52:54.528 INFO Fetch successful Mar 17 19:52:54.529271 coreos-metadata[1135]: Mar 17 19:52:54.529 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 17 19:52:54.541815 coreos-metadata[1135]: Mar 17 19:52:54.541 INFO Fetch successful Mar 17 19:52:54.551016 unknown[1135]: wrote ssh authorized keys file for user: core Mar 17 19:52:54.598758 update-ssh-keys[1248]: Updated "/home/core/.ssh/authorized_keys" Mar 17 19:52:54.600297 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Mar 17 19:52:54.601184 systemd[1]: Reached target multi-user.target. Mar 17 19:52:54.604100 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 19:52:54.620229 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 19:52:54.620876 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 19:52:54.624477 systemd[1]: Startup finished in 964ms (kernel) + 6.482s (initrd) + 13.873s (userspace) = 21.320s. 
Mar 17 19:52:54.677164 sshd[1242]: Accepted publickey for core from 172.24.4.1 port 57494 ssh2: RSA SHA256:0qismvO9/NycYojDPV3BgQur5FYKlC/KcDYVOn7KNLI Mar 17 19:52:54.680084 sshd[1242]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 19:52:54.692005 systemd-logind[1146]: New session 3 of user core. Mar 17 19:52:54.693221 systemd[1]: Started session-3.scope. Mar 17 19:52:55.388042 sshd[1242]: pam_unix(sshd:session): session closed for user core Mar 17 19:52:55.393270 systemd-logind[1146]: Session 3 logged out. Waiting for processes to exit. Mar 17 19:52:55.396127 systemd[1]: sshd@2-172.24.4.126:22-172.24.4.1:57494.service: Deactivated successfully. Mar 17 19:52:55.397610 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 19:52:55.398600 systemd-logind[1146]: Removed session 3. Mar 17 19:53:01.739629 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 19:53:01.740238 systemd[1]: Stopped kubelet.service. Mar 17 19:53:01.740340 systemd[1]: kubelet.service: Consumed 2.183s CPU time. Mar 17 19:53:01.743941 systemd[1]: Starting kubelet.service... Mar 17 19:53:02.039973 systemd[1]: Started kubelet.service. Mar 17 19:53:02.112746 kubelet[1257]: E0317 19:53:02.112676 1257 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 19:53:02.120145 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 19:53:02.120460 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 19:53:05.400819 systemd[1]: Started sshd@3-172.24.4.126:22-172.24.4.1:34544.service. Mar 17 19:53:06.806405 sshd[1264]: Accepted publickey for core from 172.24.4.1 port 34544 ssh2: RSA SHA256:0qismvO9/NycYojDPV3BgQur5FYKlC/KcDYVOn7KNLI Mar 17 19:53:06.809697 sshd[1264]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 19:53:06.820310 systemd[1]: Started session-4.scope. Mar 17 19:53:06.821510 systemd-logind[1146]: New session 4 of user core. Mar 17 19:53:07.378071 sshd[1264]: pam_unix(sshd:session): session closed for user core Mar 17 19:53:07.384482 systemd[1]: sshd@3-172.24.4.126:22-172.24.4.1:34544.service: Deactivated successfully. Mar 17 19:53:07.386090 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 19:53:07.387829 systemd-logind[1146]: Session 4 logged out. Waiting for processes to exit. Mar 17 19:53:07.390814 systemd[1]: Started sshd@4-172.24.4.126:22-172.24.4.1:34558.service. Mar 17 19:53:07.393783 systemd-logind[1146]: Removed session 4. Mar 17 19:53:08.754993 sshd[1270]: Accepted publickey for core from 172.24.4.1 port 34558 ssh2: RSA SHA256:0qismvO9/NycYojDPV3BgQur5FYKlC/KcDYVOn7KNLI Mar 17 19:53:08.757468 sshd[1270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 19:53:08.767964 systemd-logind[1146]: New session 5 of user core. Mar 17 19:53:08.768283 systemd[1]: Started session-5.scope. Mar 17 19:53:09.379541 sshd[1270]: pam_unix(sshd:session): session closed for user core Mar 17 19:53:09.386233 systemd[1]: Started sshd@5-172.24.4.126:22-172.24.4.1:34574.service. Mar 17 19:53:09.387454 systemd[1]: sshd@4-172.24.4.126:22-172.24.4.1:34558.service: Deactivated successfully. Mar 17 19:53:09.388906 systemd[1]: session-5.scope: Deactivated successfully. 
Mar 17 19:53:09.391603 systemd-logind[1146]: Session 5 logged out. Waiting for processes to exit. Mar 17 19:53:09.394014 systemd-logind[1146]: Removed session 5. Mar 17 19:53:10.560605 sshd[1275]: Accepted publickey for core from 172.24.4.1 port 34574 ssh2: RSA SHA256:0qismvO9/NycYojDPV3BgQur5FYKlC/KcDYVOn7KNLI Mar 17 19:53:10.563054 sshd[1275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 19:53:10.573073 systemd-logind[1146]: New session 6 of user core. Mar 17 19:53:10.573818 systemd[1]: Started session-6.scope. Mar 17 19:53:11.338420 sshd[1275]: pam_unix(sshd:session): session closed for user core Mar 17 19:53:11.344477 systemd[1]: Started sshd@6-172.24.4.126:22-172.24.4.1:34590.service. Mar 17 19:53:11.346561 systemd[1]: sshd@5-172.24.4.126:22-172.24.4.1:34574.service: Deactivated successfully. Mar 17 19:53:11.347951 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 19:53:11.351260 systemd-logind[1146]: Session 6 logged out. Waiting for processes to exit. Mar 17 19:53:11.354349 systemd-logind[1146]: Removed session 6. Mar 17 19:53:12.239694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 19:53:12.240112 systemd[1]: Stopped kubelet.service. Mar 17 19:53:12.242883 systemd[1]: Starting kubelet.service... Mar 17 19:53:12.517471 systemd[1]: Started kubelet.service. Mar 17 19:53:12.621216 kubelet[1288]: E0317 19:53:12.621181 1288 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 19:53:12.624839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 19:53:12.625141 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 19:53:12.961266 sshd[1281]: Accepted publickey for core from 172.24.4.1 port 34590 ssh2: RSA SHA256:0qismvO9/NycYojDPV3BgQur5FYKlC/KcDYVOn7KNLI Mar 17 19:53:12.961926 sshd[1281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 19:53:12.971779 systemd-logind[1146]: New session 7 of user core. Mar 17 19:53:12.972568 systemd[1]: Started session-7.scope. Mar 17 19:53:13.414281 sudo[1295]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 19:53:13.414855 sudo[1295]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 19:53:13.440870 systemd[1]: Starting coreos-metadata.service... 
Mar 17 19:53:20.503903 coreos-metadata[1299]: Mar 17 19:53:20.503 WARN failed to locate config-drive, using the metadata service API instead Mar 17 19:53:20.596332 coreos-metadata[1299]: Mar 17 19:53:20.596 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 17 19:53:20.876870 coreos-metadata[1299]: Mar 17 19:53:20.876 INFO Fetch successful Mar 17 19:53:20.877184 coreos-metadata[1299]: Mar 17 19:53:20.877 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Mar 17 19:53:20.891769 coreos-metadata[1299]: Mar 17 19:53:20.891 INFO Fetch successful Mar 17 19:53:20.892036 coreos-metadata[1299]: Mar 17 19:53:20.891 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Mar 17 19:53:20.905255 coreos-metadata[1299]: Mar 17 19:53:20.905 INFO Fetch successful Mar 17 19:53:20.905601 coreos-metadata[1299]: Mar 17 19:53:20.905 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Mar 17 19:53:20.919043 coreos-metadata[1299]: Mar 17 19:53:20.918 INFO Fetch successful Mar 17 19:53:20.919327 coreos-metadata[1299]: Mar 17 19:53:20.919 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Mar 17 19:53:20.935731 coreos-metadata[1299]: Mar 17 19:53:20.935 INFO Fetch successful Mar 17 19:53:20.952288 systemd[1]: Finished coreos-metadata.service. Mar 17 19:53:22.630092 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 19:53:22.632138 systemd[1]: Stopped kubelet.service. Mar 17 19:53:22.635049 systemd[1]: Starting kubelet.service... Mar 17 19:53:22.943213 systemd[1]: Started kubelet.service. Mar 17 19:53:22.949040 systemd[1]: Stopping kubelet.service... Mar 17 19:53:22.950157 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 19:53:22.950319 systemd[1]: Stopped kubelet.service. Mar 17 19:53:22.952492 systemd[1]: Starting kubelet.service... Mar 17 19:53:22.977328 systemd[1]: Reloading. Mar 17 19:53:23.075893 /usr/lib/systemd/system-generators/torcx-generator[1370]: time="2025-03-17T19:53:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 19:53:23.077774 /usr/lib/systemd/system-generators/torcx-generator[1370]: time="2025-03-17T19:53:23Z" level=info msg="torcx already run" Mar 17 19:53:23.367122 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 19:53:23.367144 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 19:53:23.389895 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 19:53:23.484033 systemd[1]: Started kubelet.service. Mar 17 19:53:23.485918 systemd[1]: Stopping kubelet.service... Mar 17 19:53:23.486894 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 19:53:23.487057 systemd[1]: Stopped kubelet.service. Mar 17 19:53:23.488865 systemd[1]: Starting kubelet.service... Mar 17 19:53:23.597582 systemd[1]: Started kubelet.service. 
Mar 17 19:53:23.651655 kubelet[1421]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 19:53:23.651655 kubelet[1421]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 19:53:23.651655 kubelet[1421]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 19:53:23.652470 kubelet[1421]: I0317 19:53:23.652420 1421 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 19:53:24.405599 kubelet[1421]: I0317 19:53:24.405526 1421 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 19:53:24.405599 kubelet[1421]: I0317 19:53:24.405588 1421 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 19:53:24.406149 kubelet[1421]: I0317 19:53:24.406110 1421 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 19:53:24.444458 kubelet[1421]: I0317 19:53:24.444434 1421 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 19:53:24.463572 kubelet[1421]: I0317 19:53:24.463553 1421 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 19:53:24.463942 kubelet[1421]: I0317 19:53:24.463913 1421 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 19:53:24.464278 kubelet[1421]: I0317 19:53:24.464007 1421 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.126","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 19:53:24.464494 
kubelet[1421]: I0317 19:53:24.464481 1421 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 19:53:24.464564 kubelet[1421]: I0317 19:53:24.464555 1421 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 19:53:24.464716 kubelet[1421]: I0317 19:53:24.464703 1421 state_mem.go:36] "Initialized new in-memory state store" Mar 17 19:53:24.465961 kubelet[1421]: I0317 19:53:24.465948 1421 kubelet.go:400] "Attempting to sync node with API server" Mar 17 19:53:24.466079 kubelet[1421]: I0317 19:53:24.466068 1421 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 19:53:24.466154 kubelet[1421]: I0317 19:53:24.466146 1421 kubelet.go:312] "Adding apiserver pod source" Mar 17 19:53:24.466218 kubelet[1421]: I0317 19:53:24.466209 1421 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 19:53:24.466640 kubelet[1421]: E0317 19:53:24.466574 1421 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:24.466764 kubelet[1421]: E0317 19:53:24.466698 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:24.471834 kubelet[1421]: I0317 19:53:24.471818 1421 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 19:53:24.473866 kubelet[1421]: I0317 19:53:24.473852 1421 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 19:53:24.473996 kubelet[1421]: W0317 19:53:24.473985 1421 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 19:53:24.474629 kubelet[1421]: I0317 19:53:24.474618 1421 server.go:1264] "Started kubelet" Mar 17 19:53:24.474850 kubelet[1421]: I0317 19:53:24.474790 1421 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 19:53:24.477081 kubelet[1421]: I0317 19:53:24.477040 1421 server.go:455] "Adding debug handlers to kubelet server" Mar 17 19:53:24.480795 kubelet[1421]: I0317 19:53:24.480754 1421 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 19:53:24.481101 kubelet[1421]: I0317 19:53:24.481048 1421 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 19:53:24.485302 kubelet[1421]: E0317 19:53:24.485186 1421 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.126.182daf2509c1d9b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.126,UID:172.24.4.126,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.24.4.126,},FirstTimestamp:2025-03-17 19:53:24.474599857 +0000 UTC m=+0.872262817,LastTimestamp:2025-03-17 19:53:24.474599857 +0000 UTC m=+0.872262817,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.126,}" Mar 17 19:53:24.487168 kubelet[1421]: E0317 19:53:24.487150 1421 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 19:53:24.492642 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Mar 17 19:53:24.492984 kubelet[1421]: I0317 19:53:24.492952 1421 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 19:53:24.493165 kubelet[1421]: I0317 19:53:24.493154 1421 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 19:53:24.494744 kubelet[1421]: I0317 19:53:24.494732 1421 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 19:53:24.494872 kubelet[1421]: I0317 19:53:24.494862 1421 reconciler.go:26] "Reconciler: start to sync state" Mar 17 19:53:24.495226 kubelet[1421]: E0317 19:53:24.495213 1421 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.126\" not found" Mar 17 19:53:24.496915 kubelet[1421]: I0317 19:53:24.496899 1421 factory.go:221] Registration of the systemd container factory successfully Mar 17 19:53:24.497111 kubelet[1421]: I0317 19:53:24.497092 1421 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 19:53:24.500011 kubelet[1421]: I0317 19:53:24.499998 1421 factory.go:221] Registration of the containerd container factory successfully Mar 17 19:53:24.505655 kubelet[1421]: E0317 19:53:24.504498 1421 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.126.182daf250a813615 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.126,UID:172.24.4.126,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.24.4.126,},FirstTimestamp:2025-03-17 19:53:24.487140885 +0000 UTC m=+0.884803855,LastTimestamp:2025-03-17 19:53:24.487140885 +0000 UTC m=+0.884803855,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.126,}" Mar 17 19:53:24.505655 kubelet[1421]: W0317 19:53:24.504848 1421 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.126" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 17 19:53:24.505655 kubelet[1421]: E0317 19:53:24.505211 1421 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.126" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 17 19:53:24.505655 kubelet[1421]: W0317 19:53:24.505537 1421 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 17 19:53:24.505655 kubelet[1421]: E0317 19:53:24.505575 1421 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 17 19:53:24.505883 kubelet[1421]: W0317 
19:53:24.505852 1421 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 17 19:53:24.505957 kubelet[1421]: E0317 19:53:24.505928 1421 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 17 19:53:24.506324 kubelet[1421]: E0317 19:53:24.506279 1421 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.126\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Mar 17 19:53:24.529066 kubelet[1421]: I0317 19:53:24.529047 1421 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 19:53:24.529231 kubelet[1421]: I0317 19:53:24.529219 1421 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 19:53:24.529809 kubelet[1421]: I0317 19:53:24.529795 1421 state_mem.go:36] "Initialized new in-memory state store" Mar 17 19:53:24.536844 kubelet[1421]: E0317 19:53:24.536682 1421 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.126.182daf250ced2edb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.126,UID:172.24.4.126,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.24.4.126 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.24.4.126,},FirstTimestamp:2025-03-17 19:53:24.527771355 +0000 UTC m=+0.925434325,LastTimestamp:2025-03-17 19:53:24.527771355 +0000 UTC m=+0.925434325,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.126,}" Mar 17 19:53:24.537285 kubelet[1421]: I0317 19:53:24.537257 1421 policy_none.go:49] "None policy: Start" Mar 17 19:53:24.539522 kubelet[1421]: I0317 19:53:24.539510 1421 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 19:53:24.539624 kubelet[1421]: I0317 19:53:24.539615 1421 state_mem.go:35] "Initializing new in-memory state store" Mar 17 19:53:24.550562 systemd[1]: Created slice kubepods.slice. Mar 17 19:53:24.554827 systemd[1]: Created slice kubepods-burstable.slice. Mar 17 19:53:24.559286 systemd[1]: Created slice kubepods-besteffort.slice. 
Mar 17 19:53:24.574255 kubelet[1421]: I0317 19:53:24.574233 1421 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 19:53:24.574532 kubelet[1421]: I0317 19:53:24.574497 1421 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 19:53:24.574674 kubelet[1421]: I0317 19:53:24.574662 1421 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 19:53:24.576608 kubelet[1421]: E0317 19:53:24.576578 1421 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.126\" not found" Mar 17 19:53:24.595968 kubelet[1421]: I0317 19:53:24.595934 1421 kubelet_node_status.go:73] "Attempting to register node" node="172.24.4.126" Mar 17 19:53:24.603191 kubelet[1421]: I0317 19:53:24.603169 1421 kubelet_node_status.go:76] "Successfully registered node" node="172.24.4.126" Mar 17 19:53:24.645087 kubelet[1421]: E0317 19:53:24.645039 1421 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.126\" not found" Mar 17 19:53:24.659166 kubelet[1421]: I0317 19:53:24.656351 1421 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 19:53:24.661915 sudo[1295]: pam_unix(sudo:session): session closed for user root Mar 17 19:53:24.662649 kubelet[1421]: I0317 19:53:24.662014 1421 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 19:53:24.662649 kubelet[1421]: I0317 19:53:24.662055 1421 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 19:53:24.662649 kubelet[1421]: I0317 19:53:24.662102 1421 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 19:53:24.662649 kubelet[1421]: E0317 19:53:24.662202 1421 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Mar 17 19:53:24.746103 kubelet[1421]: E0317 19:53:24.745966 1421 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.126\" not found" Mar 17 19:53:24.847122 kubelet[1421]: E0317 19:53:24.847039 1421 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.126\" not found" Mar 17 19:53:24.948192 kubelet[1421]: E0317 19:53:24.947176 1421 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.126\" not found" Mar 17 19:53:25.048334 kubelet[1421]: E0317 19:53:25.048199 1421 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.126\" not found" Mar 17 19:53:25.073039 sshd[1281]: pam_unix(sshd:session): session closed for user core Mar 17 19:53:25.078657 systemd[1]: sshd@6-172.24.4.126:22-172.24.4.1:34590.service: Deactivated successfully. Mar 17 19:53:25.080189 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 19:53:25.080523 systemd[1]: session-7.scope: Consumed 1.170s CPU time. Mar 17 19:53:25.081707 systemd-logind[1146]: Session 7 logged out. Waiting for processes to exit. Mar 17 19:53:25.083827 systemd-logind[1146]: Removed session 7. 
Mar 17 19:53:25.149216 kubelet[1421]: E0317 19:53:25.149133 1421 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.126\" not found" Mar 17 19:53:25.250251 kubelet[1421]: E0317 19:53:25.250196 1421 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.126\" not found" Mar 17 19:53:25.351652 kubelet[1421]: E0317 19:53:25.351589 1421 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.126\" not found" Mar 17 19:53:25.410198 kubelet[1421]: I0317 19:53:25.410161 1421 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 17 19:53:25.410744 kubelet[1421]: W0317 19:53:25.410707 1421 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 19:53:25.452933 kubelet[1421]: E0317 19:53:25.452865 1421 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.126\" not found" Mar 17 19:53:25.467279 kubelet[1421]: E0317 19:53:25.467241 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:25.553597 kubelet[1421]: E0317 19:53:25.553426 1421 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.126\" not found" Mar 17 19:53:25.654041 kubelet[1421]: E0317 19:53:25.653971 1421 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.126\" not found" Mar 17 19:53:25.755232 kubelet[1421]: E0317 19:53:25.755088 1421 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.126\" not found" Mar 17 19:53:25.856413 kubelet[1421]: E0317 19:53:25.856191 1421 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.126\" not found" Mar 17 19:53:25.956725 kubelet[1421]: E0317 19:53:25.956679 1421 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.126\" not found" Mar 17 19:53:26.057092 kubelet[1421]: E0317 19:53:26.057021 1421 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.126\" not found" Mar 17 19:53:26.158916 kubelet[1421]: I0317 19:53:26.158810 1421 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Mar 17 19:53:26.159942 env[1154]: time="2025-03-17T19:53:26.159730353Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 19:53:26.160538 kubelet[1421]: I0317 19:53:26.160169 1421 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Mar 17 19:53:26.467707 kubelet[1421]: I0317 19:53:26.467554 1421 apiserver.go:52] "Watching apiserver" Mar 17 19:53:26.467707 kubelet[1421]: E0317 19:53:26.467653 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:26.481095 kubelet[1421]: I0317 19:53:26.481041 1421 topology_manager.go:215] "Topology Admit Handler" podUID="21544e81-eda6-424c-969b-1c7e79cee499" podNamespace="kube-system" podName="cilium-gnv5r" Mar 17 19:53:26.481593 kubelet[1421]: I0317 19:53:26.481555 1421 topology_manager.go:215] "Topology Admit Handler" podUID="7f795fa8-91bf-4458-b877-0b55268f8350" podNamespace="kube-system" podName="kube-proxy-hjrsp" Mar 17 19:53:26.493645 systemd[1]: Created slice kubepods-burstable-pod21544e81_eda6_424c_969b_1c7e79cee499.slice. Mar 17 19:53:26.496732 kubelet[1421]: I0317 19:53:26.496612 1421 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 19:53:26.507445 kubelet[1421]: I0317 19:53:26.507404 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f795fa8-91bf-4458-b877-0b55268f8350-xtables-lock\") pod \"kube-proxy-hjrsp\" (UID: \"7f795fa8-91bf-4458-b877-0b55268f8350\") " pod="kube-system/kube-proxy-hjrsp" Mar 17 19:53:26.507539 kubelet[1421]: I0317 19:53:26.507465 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-cilium-cgroup\") pod \"cilium-gnv5r\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " pod="kube-system/cilium-gnv5r" Mar 17 19:53:26.507539 kubelet[1421]: I0317 19:53:26.507511 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-host-proc-sys-kernel\") pod \"cilium-gnv5r\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " pod="kube-system/cilium-gnv5r" Mar 17 19:53:26.507627 kubelet[1421]: I0317 19:53:26.507551 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-host-proc-sys-net\") pod \"cilium-gnv5r\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " pod="kube-system/cilium-gnv5r" Mar 17 19:53:26.507627 kubelet[1421]: I0317 19:53:26.507588 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/21544e81-eda6-424c-969b-1c7e79cee499-hubble-tls\") pod \"cilium-gnv5r\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " pod="kube-system/cilium-gnv5r" Mar 17 19:53:26.507696 kubelet[1421]: I0317 19:53:26.507625 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-cilium-run\") pod \"cilium-gnv5r\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " pod="kube-system/cilium-gnv5r" Mar 17 19:53:26.507696 kubelet[1421]: I0317 19:53:26.507661 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/21544e81-eda6-424c-969b-1c7e79cee499-clustermesh-secrets\") pod \"cilium-gnv5r\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " pod="kube-system/cilium-gnv5r" Mar 17 19:53:26.507764 kubelet[1421]: I0317 19:53:26.507697 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgcgn\" (UniqueName: \"kubernetes.io/projected/21544e81-eda6-424c-969b-1c7e79cee499-kube-api-access-cgcgn\") pod \"cilium-gnv5r\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " pod="kube-system/cilium-gnv5r" Mar 17 19:53:26.507764 kubelet[1421]: I0317 19:53:26.507741 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f795fa8-91bf-4458-b877-0b55268f8350-lib-modules\") pod \"kube-proxy-hjrsp\" (UID: \"7f795fa8-91bf-4458-b877-0b55268f8350\") " pod="kube-system/kube-proxy-hjrsp" Mar 17 19:53:26.507849 kubelet[1421]: I0317 19:53:26.507781 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhd8x\" (UniqueName: \"kubernetes.io/projected/7f795fa8-91bf-4458-b877-0b55268f8350-kube-api-access-zhd8x\") pod \"kube-proxy-hjrsp\" (UID: \"7f795fa8-91bf-4458-b877-0b55268f8350\") " pod="kube-system/kube-proxy-hjrsp" Mar 17 19:53:26.507885 kubelet[1421]: I0317 19:53:26.507853 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-hostproc\") pod \"cilium-gnv5r\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " pod="kube-system/cilium-gnv5r" Mar 17 19:53:26.507917 kubelet[1421]: I0317 19:53:26.507892 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-lib-modules\") pod \"cilium-gnv5r\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " pod="kube-system/cilium-gnv5r" Mar 17 19:53:26.507950 kubelet[1421]: I0317 19:53:26.507929 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-etc-cni-netd\") pod \"cilium-gnv5r\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " pod="kube-system/cilium-gnv5r" Mar 17 19:53:26.507981 kubelet[1421]: I0317 19:53:26.507967 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-xtables-lock\") pod \"cilium-gnv5r\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " pod="kube-system/cilium-gnv5r" Mar 17 19:53:26.508395 kubelet[1421]: I0317 19:53:26.508005 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21544e81-eda6-424c-969b-1c7e79cee499-cilium-config-path\") pod \"cilium-gnv5r\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " pod="kube-system/cilium-gnv5r" Mar 17 19:53:26.508395 kubelet[1421]: I0317 19:53:26.508055 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7f795fa8-91bf-4458-b877-0b55268f8350-kube-proxy\") pod \"kube-proxy-hjrsp\" (UID: 
\"7f795fa8-91bf-4458-b877-0b55268f8350\") " pod="kube-system/kube-proxy-hjrsp" Mar 17 19:53:26.508395 kubelet[1421]: I0317 19:53:26.508101 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-bpf-maps\") pod \"cilium-gnv5r\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " pod="kube-system/cilium-gnv5r" Mar 17 19:53:26.508158 systemd[1]: Created slice kubepods-besteffort-pod7f795fa8_91bf_4458_b877_0b55268f8350.slice. Mar 17 19:53:26.508938 kubelet[1421]: I0317 19:53:26.508899 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-cni-path\") pod \"cilium-gnv5r\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " pod="kube-system/cilium-gnv5r" Mar 17 19:53:26.804507 env[1154]: time="2025-03-17T19:53:26.803912738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gnv5r,Uid:21544e81-eda6-424c-969b-1c7e79cee499,Namespace:kube-system,Attempt:0,}" Mar 17 19:53:26.819326 env[1154]: time="2025-03-17T19:53:26.818533110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hjrsp,Uid:7f795fa8-91bf-4458-b877-0b55268f8350,Namespace:kube-system,Attempt:0,}" Mar 17 19:53:27.468386 kubelet[1421]: E0317 19:53:27.468260 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:27.622491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2611769094.mount: Deactivated successfully. Mar 17 19:53:27.630406 env[1154]: time="2025-03-17T19:53:27.627285391Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:53:27.630406 env[1154]: time="2025-03-17T19:53:27.629504567Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:53:27.633965 env[1154]: time="2025-03-17T19:53:27.633886974Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:53:27.638025 env[1154]: time="2025-03-17T19:53:27.637968996Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:53:27.643213 env[1154]: time="2025-03-17T19:53:27.643163269Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:53:27.645408 env[1154]: time="2025-03-17T19:53:27.645317871Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:53:27.651650 env[1154]: time="2025-03-17T19:53:27.651562411Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:53:27.654638 env[1154]: time="2025-03-17T19:53:27.654539189Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:53:27.718777 env[1154]: time="2025-03-17T19:53:27.718492662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 19:53:27.718777 env[1154]: time="2025-03-17T19:53:27.718691467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 19:53:27.720070 env[1154]: time="2025-03-17T19:53:27.719924731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 19:53:27.720988 env[1154]: time="2025-03-17T19:53:27.720871809Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de77e5a124618fa8cd9f59b85658fd4d207d0c0a081afce95043e35b6beafb95 pid=1474 runtime=io.containerd.runc.v2 Mar 17 19:53:27.728911 env[1154]: time="2025-03-17T19:53:27.728837501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 19:53:27.728911 env[1154]: time="2025-03-17T19:53:27.728883270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 19:53:27.729057 env[1154]: time="2025-03-17T19:53:27.728897006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 19:53:27.729057 env[1154]: time="2025-03-17T19:53:27.729001519Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529 pid=1492 runtime=io.containerd.runc.v2 Mar 17 19:53:27.749162 systemd[1]: Started cri-containerd-de77e5a124618fa8cd9f59b85658fd4d207d0c0a081afce95043e35b6beafb95.scope. Mar 17 19:53:27.752452 systemd[1]: Started cri-containerd-4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529.scope. Mar 17 19:53:27.785280 env[1154]: time="2025-03-17T19:53:27.785234175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hjrsp,Uid:7f795fa8-91bf-4458-b877-0b55268f8350,Namespace:kube-system,Attempt:0,} returns sandbox id \"de77e5a124618fa8cd9f59b85658fd4d207d0c0a081afce95043e35b6beafb95\"" Mar 17 19:53:27.787744 env[1154]: time="2025-03-17T19:53:27.787694892Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 19:53:27.792964 env[1154]: time="2025-03-17T19:53:27.792925735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gnv5r,Uid:21544e81-eda6-424c-969b-1c7e79cee499,Namespace:kube-system,Attempt:0,} returns sandbox id \"4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529\"" Mar 17 19:53:28.468645 kubelet[1421]: E0317 19:53:28.468574 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:29.297302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3729221847.mount: Deactivated successfully. 
Mar 17 19:53:29.469543 kubelet[1421]: E0317 19:53:29.469442 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:30.130086 env[1154]: time="2025-03-17T19:53:30.129948052Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:53:30.138888 env[1154]: time="2025-03-17T19:53:30.138800152Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:53:30.142428 env[1154]: time="2025-03-17T19:53:30.142326326Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:53:30.145107 env[1154]: time="2025-03-17T19:53:30.145024171Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:53:30.146304 env[1154]: time="2025-03-17T19:53:30.146224637Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 17 19:53:30.149774 env[1154]: time="2025-03-17T19:53:30.148600963Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 19:53:30.150149 env[1154]: time="2025-03-17T19:53:30.150020962Z" level=info msg="CreateContainer within sandbox \"de77e5a124618fa8cd9f59b85658fd4d207d0c0a081afce95043e35b6beafb95\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 19:53:30.174149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3269047120.mount: Deactivated successfully. Mar 17 19:53:30.182586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3622852201.mount: Deactivated successfully. Mar 17 19:53:30.183824 env[1154]: time="2025-03-17T19:53:30.183775731Z" level=info msg="CreateContainer within sandbox \"de77e5a124618fa8cd9f59b85658fd4d207d0c0a081afce95043e35b6beafb95\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"abb6413c3829a24cc250f7f9b203dbbf6675a3c1fc0423ba789408c7696ea61e\"" Mar 17 19:53:30.184478 env[1154]: time="2025-03-17T19:53:30.184440423Z" level=info msg="StartContainer for \"abb6413c3829a24cc250f7f9b203dbbf6675a3c1fc0423ba789408c7696ea61e\"" Mar 17 19:53:30.221306 systemd[1]: Started cri-containerd-abb6413c3829a24cc250f7f9b203dbbf6675a3c1fc0423ba789408c7696ea61e.scope. 
Mar 17 19:53:30.265481 env[1154]: time="2025-03-17T19:53:30.265423696Z" level=info msg="StartContainer for \"abb6413c3829a24cc250f7f9b203dbbf6675a3c1fc0423ba789408c7696ea61e\" returns successfully" Mar 17 19:53:30.471306 kubelet[1421]: E0317 19:53:30.471092 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:30.723080 kubelet[1421]: I0317 19:53:30.722948 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hjrsp" podStartSLOduration=4.362033372 podStartE2EDuration="6.722931195s" podCreationTimestamp="2025-03-17 19:53:24 +0000 UTC" firstStartedPulling="2025-03-17 19:53:27.786976988 +0000 UTC m=+4.184639958" lastFinishedPulling="2025-03-17 19:53:30.147874821 +0000 UTC m=+6.545537781" observedRunningTime="2025-03-17 19:53:30.722420811 +0000 UTC m=+7.120083781" watchObservedRunningTime="2025-03-17 19:53:30.722931195 +0000 UTC m=+7.120594155" Mar 17 19:53:31.472338 kubelet[1421]: E0317 19:53:31.472290 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:32.304969 update_engine[1148]: I0317 19:53:32.304885 1148 update_attempter.cc:509] Updating boot flags... Mar 17 19:53:32.473701 kubelet[1421]: E0317 19:53:32.473647 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:33.474624 kubelet[1421]: E0317 19:53:33.474550 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:34.474786 kubelet[1421]: E0317 19:53:34.474712 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:35.476611 kubelet[1421]: E0317 19:53:35.476540 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:36.478417 kubelet[1421]: E0317 19:53:36.478247 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:37.478981 kubelet[1421]: E0317 19:53:37.478937 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:37.488668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3268399534.mount: Deactivated successfully. 
Mar 17 19:53:38.479295 kubelet[1421]: E0317 19:53:38.479227 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:39.479872 kubelet[1421]: E0317 19:53:39.479815 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:40.480500 kubelet[1421]: E0317 19:53:40.480450 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:41.481615 kubelet[1421]: E0317 19:53:41.481544 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:41.944569 env[1154]: time="2025-03-17T19:53:41.944461801Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:53:41.947456 env[1154]: time="2025-03-17T19:53:41.947404608Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:53:41.952802 env[1154]: time="2025-03-17T19:53:41.952708406Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:53:41.953612 env[1154]: time="2025-03-17T19:53:41.953505152Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 17 19:53:41.958987 env[1154]: time="2025-03-17T19:53:41.958898220Z" level=info msg="CreateContainer within sandbox \"4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 19:53:41.981766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4211183170.mount: Deactivated successfully. Mar 17 19:53:42.003956 env[1154]: time="2025-03-17T19:53:42.003879881Z" level=info msg="CreateContainer within sandbox \"4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6889ced94f56209cbdf79084302cfa75edf5fd70eba179e1d1953f283ca313da\"" Mar 17 19:53:42.005771 env[1154]: time="2025-03-17T19:53:42.005676394Z" level=info msg="StartContainer for \"6889ced94f56209cbdf79084302cfa75edf5fd70eba179e1d1953f283ca313da\"" Mar 17 19:53:42.057499 systemd[1]: Started cri-containerd-6889ced94f56209cbdf79084302cfa75edf5fd70eba179e1d1953f283ca313da.scope. Mar 17 19:53:42.109010 env[1154]: time="2025-03-17T19:53:42.108945114Z" level=info msg="StartContainer for \"6889ced94f56209cbdf79084302cfa75edf5fd70eba179e1d1953f283ca313da\" returns successfully" Mar 17 19:53:42.116792 systemd[1]: cri-containerd-6889ced94f56209cbdf79084302cfa75edf5fd70eba179e1d1953f283ca313da.scope: Deactivated successfully. 
Mar 17 19:53:42.482425 kubelet[1421]: E0317 19:53:42.482323 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:42.975089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6889ced94f56209cbdf79084302cfa75edf5fd70eba179e1d1953f283ca313da-rootfs.mount: Deactivated successfully. Mar 17 19:53:43.308212 env[1154]: time="2025-03-17T19:53:43.307808470Z" level=info msg="shim disconnected" id=6889ced94f56209cbdf79084302cfa75edf5fd70eba179e1d1953f283ca313da Mar 17 19:53:43.308212 env[1154]: time="2025-03-17T19:53:43.307901567Z" level=warning msg="cleaning up after shim disconnected" id=6889ced94f56209cbdf79084302cfa75edf5fd70eba179e1d1953f283ca313da namespace=k8s.io Mar 17 19:53:43.308212 env[1154]: time="2025-03-17T19:53:43.307925352Z" level=info msg="cleaning up dead shim" Mar 17 19:53:43.326855 env[1154]: time="2025-03-17T19:53:43.326752501Z" level=warning msg="cleanup warnings time=\"2025-03-17T19:53:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1774 runtime=io.containerd.runc.v2\n" Mar 17 19:53:43.482915 kubelet[1421]: E0317 19:53:43.482868 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:43.743887 env[1154]: time="2025-03-17T19:53:43.743817819Z" level=info msg="CreateContainer within sandbox \"4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 19:53:43.777906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1923470579.mount: Deactivated successfully. Mar 17 19:53:43.795976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2956995973.mount: Deactivated successfully. Mar 17 19:53:43.807988 env[1154]: time="2025-03-17T19:53:43.807815909Z" level=info msg="CreateContainer within sandbox \"4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1d8e4146bd593c8c8f96faa736e8fd0a2b1192b61df11d7141127dbb4a6e513c\"" Mar 17 19:53:43.809150 env[1154]: time="2025-03-17T19:53:43.809089328Z" level=info msg="StartContainer for \"1d8e4146bd593c8c8f96faa736e8fd0a2b1192b61df11d7141127dbb4a6e513c\"" Mar 17 19:53:43.843932 systemd[1]: Started cri-containerd-1d8e4146bd593c8c8f96faa736e8fd0a2b1192b61df11d7141127dbb4a6e513c.scope. Mar 17 19:53:43.899843 env[1154]: time="2025-03-17T19:53:43.898622094Z" level=info msg="StartContainer for \"1d8e4146bd593c8c8f96faa736e8fd0a2b1192b61df11d7141127dbb4a6e513c\" returns successfully" Mar 17 19:53:43.905200 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 19:53:43.905680 systemd[1]: Stopped systemd-sysctl.service. Mar 17 19:53:43.905977 systemd[1]: Stopping systemd-sysctl.service... Mar 17 19:53:43.907701 systemd[1]: Starting systemd-sysctl.service... Mar 17 19:53:43.913148 systemd[1]: cri-containerd-1d8e4146bd593c8c8f96faa736e8fd0a2b1192b61df11d7141127dbb4a6e513c.scope: Deactivated successfully. Mar 17 19:53:43.917724 systemd[1]: Finished systemd-sysctl.service. 
Mar 17 19:53:43.943057 env[1154]: time="2025-03-17T19:53:43.943012834Z" level=info msg="shim disconnected" id=1d8e4146bd593c8c8f96faa736e8fd0a2b1192b61df11d7141127dbb4a6e513c Mar 17 19:53:43.943305 env[1154]: time="2025-03-17T19:53:43.943284349Z" level=warning msg="cleaning up after shim disconnected" id=1d8e4146bd593c8c8f96faa736e8fd0a2b1192b61df11d7141127dbb4a6e513c namespace=k8s.io Mar 17 19:53:43.943414 env[1154]: time="2025-03-17T19:53:43.943399799Z" level=info msg="cleaning up dead shim" Mar 17 19:53:43.950230 env[1154]: time="2025-03-17T19:53:43.950206123Z" level=warning msg="cleanup warnings time=\"2025-03-17T19:53:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1839 runtime=io.containerd.runc.v2\n" Mar 17 19:53:44.467208 kubelet[1421]: E0317 19:53:44.467127 1421 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:44.483859 kubelet[1421]: E0317 19:53:44.483723 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:44.751102 env[1154]: time="2025-03-17T19:53:44.750918380Z" level=info msg="CreateContainer within sandbox \"4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 19:53:44.784699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount238323313.mount: Deactivated successfully. Mar 17 19:53:44.802726 env[1154]: time="2025-03-17T19:53:44.802651836Z" level=info msg="CreateContainer within sandbox \"4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"26f359f96f2a750e22345a45f08b83a7f45068e47f101ade9ec18ec9f3c1642d\"" Mar 17 19:53:44.804087 env[1154]: time="2025-03-17T19:53:44.804035782Z" level=info msg="StartContainer for \"26f359f96f2a750e22345a45f08b83a7f45068e47f101ade9ec18ec9f3c1642d\"" Mar 17 19:53:44.858311 systemd[1]: Started cri-containerd-26f359f96f2a750e22345a45f08b83a7f45068e47f101ade9ec18ec9f3c1642d.scope. Mar 17 19:53:44.890318 systemd[1]: cri-containerd-26f359f96f2a750e22345a45f08b83a7f45068e47f101ade9ec18ec9f3c1642d.scope: Deactivated successfully. Mar 17 19:53:44.896587 env[1154]: time="2025-03-17T19:53:44.896541599Z" level=info msg="StartContainer for \"26f359f96f2a750e22345a45f08b83a7f45068e47f101ade9ec18ec9f3c1642d\" returns successfully" Mar 17 19:53:44.924991 env[1154]: time="2025-03-17T19:53:44.924911506Z" level=info msg="shim disconnected" id=26f359f96f2a750e22345a45f08b83a7f45068e47f101ade9ec18ec9f3c1642d Mar 17 19:53:44.924991 env[1154]: time="2025-03-17T19:53:44.924970398Z" level=warning msg="cleaning up after shim disconnected" id=26f359f96f2a750e22345a45f08b83a7f45068e47f101ade9ec18ec9f3c1642d namespace=k8s.io Mar 17 19:53:44.924991 env[1154]: time="2025-03-17T19:53:44.924980788Z" level=info msg="cleaning up dead shim" Mar 17 19:53:44.933398 env[1154]: time="2025-03-17T19:53:44.933327124Z" level=warning msg="cleanup warnings time=\"2025-03-17T19:53:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1898 runtime=io.containerd.runc.v2\n" Mar 17 19:53:44.974703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26f359f96f2a750e22345a45f08b83a7f45068e47f101ade9ec18ec9f3c1642d-rootfs.mount: Deactivated successfully. 
Mar 17 19:53:45.484578 kubelet[1421]: E0317 19:53:45.484428 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:45.757524 env[1154]: time="2025-03-17T19:53:45.757446911Z" level=info msg="CreateContainer within sandbox \"4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 19:53:45.800256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1380913120.mount: Deactivated successfully. Mar 17 19:53:45.818984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2930653616.mount: Deactivated successfully. Mar 17 19:53:45.827022 env[1154]: time="2025-03-17T19:53:45.826926543Z" level=info msg="CreateContainer within sandbox \"4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"587229cdc0fbb0f41f9747d7ee032bc40e8db913a3a9e3f22860f098080eaf2c\"" Mar 17 19:53:45.828430 env[1154]: time="2025-03-17T19:53:45.828338290Z" level=info msg="StartContainer for \"587229cdc0fbb0f41f9747d7ee032bc40e8db913a3a9e3f22860f098080eaf2c\"" Mar 17 19:53:45.867111 systemd[1]: Started cri-containerd-587229cdc0fbb0f41f9747d7ee032bc40e8db913a3a9e3f22860f098080eaf2c.scope. Mar 17 19:53:45.903930 systemd[1]: cri-containerd-587229cdc0fbb0f41f9747d7ee032bc40e8db913a3a9e3f22860f098080eaf2c.scope: Deactivated successfully. Mar 17 19:53:45.913710 env[1154]: time="2025-03-17T19:53:45.913678124Z" level=info msg="StartContainer for \"587229cdc0fbb0f41f9747d7ee032bc40e8db913a3a9e3f22860f098080eaf2c\" returns successfully" Mar 17 19:53:45.935818 env[1154]: time="2025-03-17T19:53:45.935771573Z" level=info msg="shim disconnected" id=587229cdc0fbb0f41f9747d7ee032bc40e8db913a3a9e3f22860f098080eaf2c Mar 17 19:53:45.936064 env[1154]: time="2025-03-17T19:53:45.936045643Z" level=warning msg="cleaning up after shim disconnected" id=587229cdc0fbb0f41f9747d7ee032bc40e8db913a3a9e3f22860f098080eaf2c namespace=k8s.io Mar 17 19:53:45.936136 env[1154]: time="2025-03-17T19:53:45.936121797Z" level=info msg="cleaning up dead shim" Mar 17 19:53:45.945115 env[1154]: time="2025-03-17T19:53:45.945088167Z" level=warning msg="cleanup warnings time=\"2025-03-17T19:53:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1953 runtime=io.containerd.runc.v2\n" Mar 17 19:53:46.484921 kubelet[1421]: E0317 19:53:46.484809 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:46.766872 env[1154]: time="2025-03-17T19:53:46.766765410Z" level=info msg="CreateContainer within sandbox \"4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 19:53:46.800180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount179623863.mount: Deactivated successfully. 
Mar 17 19:53:46.825655 env[1154]: time="2025-03-17T19:53:46.825578493Z" level=info msg="CreateContainer within sandbox \"4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e\"" Mar 17 19:53:46.827451 env[1154]: time="2025-03-17T19:53:46.827332126Z" level=info msg="StartContainer for \"908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e\"" Mar 17 19:53:46.861431 systemd[1]: Started cri-containerd-908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e.scope. Mar 17 19:53:46.918832 env[1154]: time="2025-03-17T19:53:46.918795022Z" level=info msg="StartContainer for \"908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e\" returns successfully" Mar 17 19:53:47.090206 kubelet[1421]: I0317 19:53:47.090018 1421 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 19:53:47.466399 kernel: Initializing XFRM netlink socket Mar 17 19:53:47.485980 kubelet[1421]: E0317 19:53:47.485887 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:48.486273 kubelet[1421]: E0317 19:53:48.486162 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:49.234934 systemd-networkd[989]: cilium_host: Link UP Mar 17 19:53:49.246258 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Mar 17 19:53:49.246498 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Mar 17 19:53:49.248165 systemd-networkd[989]: cilium_net: Link UP Mar 17 19:53:49.256647 systemd-networkd[989]: cilium_net: Gained carrier Mar 17 19:53:49.258526 systemd-networkd[989]: cilium_host: Gained carrier Mar 17 19:53:49.377548 systemd-networkd[989]: cilium_vxlan: Link UP Mar 17 19:53:49.377783 systemd-networkd[989]: cilium_vxlan: Gained carrier Mar 17 19:53:49.487168 kubelet[1421]: E0317 19:53:49.486957 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:49.546663 systemd-networkd[989]: cilium_host: Gained IPv6LL Mar 17 19:53:49.635496 kernel: NET: Registered PF_ALG protocol family Mar 17 19:53:50.082713 systemd-networkd[989]: cilium_net: Gained IPv6LL Mar 17 19:53:50.487707 kubelet[1421]: E0317 19:53:50.487621 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:50.514283 systemd-networkd[989]: lxc_health: Link UP Mar 17 19:53:50.536510 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 19:53:50.536343 systemd-networkd[989]: lxc_health: Gained carrier Mar 17 19:53:50.786761 systemd-networkd[989]: cilium_vxlan: Gained IPv6LL Mar 17 19:53:50.838545 kubelet[1421]: I0317 19:53:50.838437 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gnv5r" podStartSLOduration=12.677650208 podStartE2EDuration="26.838395095s" podCreationTimestamp="2025-03-17 19:53:24 +0000 UTC" firstStartedPulling="2025-03-17 19:53:27.794887201 +0000 UTC m=+4.192550171" lastFinishedPulling="2025-03-17 19:53:41.955632047 +0000 UTC m=+18.353295058" observedRunningTime="2025-03-17 19:53:47.805743107 +0000 UTC m=+24.203406117" watchObservedRunningTime="2025-03-17 19:53:50.838395095 +0000 UTC m=+27.236058065" Mar 17 19:53:51.358063 kubelet[1421]: I0317 
19:53:51.358027 1421 topology_manager.go:215] "Topology Admit Handler" podUID="7c2a7c2d-f17b-416a-a4c6-ccd12c852b02" podNamespace="default" podName="nginx-deployment-85f456d6dd-znc8z" Mar 17 19:53:51.369974 systemd[1]: Created slice kubepods-besteffort-pod7c2a7c2d_f17b_416a_a4c6_ccd12c852b02.slice. Mar 17 19:53:51.392801 kubelet[1421]: I0317 19:53:51.392748 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzgvb\" (UniqueName: \"kubernetes.io/projected/7c2a7c2d-f17b-416a-a4c6-ccd12c852b02-kube-api-access-hzgvb\") pod \"nginx-deployment-85f456d6dd-znc8z\" (UID: \"7c2a7c2d-f17b-416a-a4c6-ccd12c852b02\") " pod="default/nginx-deployment-85f456d6dd-znc8z" Mar 17 19:53:51.488646 kubelet[1421]: E0317 19:53:51.488594 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:51.676987 env[1154]: time="2025-03-17T19:53:51.676847307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-znc8z,Uid:7c2a7c2d-f17b-416a-a4c6-ccd12c852b02,Namespace:default,Attempt:0,}" Mar 17 19:53:51.768527 systemd-networkd[989]: lxc071d69d0f225: Link UP Mar 17 19:53:51.772392 kernel: eth0: renamed from tmpc22a9 Mar 17 19:53:51.780677 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 19:53:51.780757 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc071d69d0f225: link becomes ready Mar 17 19:53:51.781286 systemd-networkd[989]: lxc071d69d0f225: Gained carrier Mar 17 19:53:51.905420 systemd-networkd[989]: lxc_health: Gained IPv6LL Mar 17 19:53:52.489153 kubelet[1421]: E0317 19:53:52.489083 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:53.121383 systemd-networkd[989]: lxc071d69d0f225: Gained IPv6LL Mar 17 19:53:53.490215 kubelet[1421]: E0317 19:53:53.490165 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:54.491436 kubelet[1421]: E0317 19:53:54.491388 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:55.492331 kubelet[1421]: E0317 19:53:55.492236 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:56.041090 env[1154]: time="2025-03-17T19:53:56.041008309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 19:53:56.041441 env[1154]: time="2025-03-17T19:53:56.041059957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 19:53:56.041441 env[1154]: time="2025-03-17T19:53:56.041080004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 19:53:56.041441 env[1154]: time="2025-03-17T19:53:56.041310749Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c22a98cb06816a87c4857089ca0744db6e77b8b2df909352d136066df7c008c8 pid=2478 runtime=io.containerd.runc.v2 Mar 17 19:53:56.056397 systemd[1]: Started cri-containerd-c22a98cb06816a87c4857089ca0744db6e77b8b2df909352d136066df7c008c8.scope. 
Mar 17 19:53:56.062737 systemd[1]: run-containerd-runc-k8s.io-c22a98cb06816a87c4857089ca0744db6e77b8b2df909352d136066df7c008c8-runc.9vqWhM.mount: Deactivated successfully. Mar 17 19:53:56.105108 env[1154]: time="2025-03-17T19:53:56.105064783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-znc8z,Uid:7c2a7c2d-f17b-416a-a4c6-ccd12c852b02,Namespace:default,Attempt:0,} returns sandbox id \"c22a98cb06816a87c4857089ca0744db6e77b8b2df909352d136066df7c008c8\"" Mar 17 19:53:56.107483 env[1154]: time="2025-03-17T19:53:56.107459317Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 19:53:56.492773 kubelet[1421]: E0317 19:53:56.492713 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:57.493279 kubelet[1421]: E0317 19:53:57.493203 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:58.493378 kubelet[1421]: E0317 19:53:58.493327 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:59.493934 kubelet[1421]: E0317 19:53:59.493874 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:53:59.974810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2257479662.mount: Deactivated successfully. Mar 17 19:54:00.494160 kubelet[1421]: E0317 19:54:00.494123 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:01.494283 kubelet[1421]: E0317 19:54:01.494207 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:02.170570 env[1154]: time="2025-03-17T19:54:02.170495963Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:54:02.173843 env[1154]: time="2025-03-17T19:54:02.173790764Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:54:02.178072 env[1154]: time="2025-03-17T19:54:02.178003414Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:54:02.182464 env[1154]: time="2025-03-17T19:54:02.182407824Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:54:02.184469 env[1154]: time="2025-03-17T19:54:02.184399874Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\"" Mar 17 19:54:02.190690 env[1154]: time="2025-03-17T19:54:02.190597601Z" level=info msg="CreateContainer within sandbox \"c22a98cb06816a87c4857089ca0744db6e77b8b2df909352d136066df7c008c8\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Mar 17 19:54:02.218775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1218319210.mount: Deactivated successfully. 
Mar 17 19:54:02.234545 env[1154]: time="2025-03-17T19:54:02.234428385Z" level=info msg="CreateContainer within sandbox \"c22a98cb06816a87c4857089ca0744db6e77b8b2df909352d136066df7c008c8\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"679508b4991d5d9113b6aececc8f79c08630c2315b4a39fc46c36dce7d1e51d9\"" Mar 17 19:54:02.235507 env[1154]: time="2025-03-17T19:54:02.235456340Z" level=info msg="StartContainer for \"679508b4991d5d9113b6aececc8f79c08630c2315b4a39fc46c36dce7d1e51d9\"" Mar 17 19:54:02.281954 systemd[1]: Started cri-containerd-679508b4991d5d9113b6aececc8f79c08630c2315b4a39fc46c36dce7d1e51d9.scope. Mar 17 19:54:02.330479 env[1154]: time="2025-03-17T19:54:02.330438379Z" level=info msg="StartContainer for \"679508b4991d5d9113b6aececc8f79c08630c2315b4a39fc46c36dce7d1e51d9\" returns successfully" Mar 17 19:54:02.495485 kubelet[1421]: E0317 19:54:02.495410 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:02.895189 kubelet[1421]: I0317 19:54:02.894562 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-znc8z" podStartSLOduration=5.81370532 podStartE2EDuration="11.894498883s" podCreationTimestamp="2025-03-17 19:53:51 +0000 UTC" firstStartedPulling="2025-03-17 19:53:56.10690958 +0000 UTC m=+32.504572540" lastFinishedPulling="2025-03-17 19:54:02.187703092 +0000 UTC m=+38.585366103" observedRunningTime="2025-03-17 19:54:02.893416234 +0000 UTC m=+39.291079254" watchObservedRunningTime="2025-03-17 19:54:02.894498883 +0000 UTC m=+39.292161893" Mar 17 19:54:03.496489 kubelet[1421]: E0317 19:54:03.496432 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:04.467064 kubelet[1421]: E0317 19:54:04.467009 1421 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:04.498000 kubelet[1421]: E0317 19:54:04.497911 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:05.498645 kubelet[1421]: E0317 19:54:05.498584 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:06.499885 kubelet[1421]: E0317 19:54:06.499826 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:07.501104 kubelet[1421]: E0317 19:54:07.501050 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:08.502084 kubelet[1421]: E0317 19:54:08.501991 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:09.503134 kubelet[1421]: E0317 19:54:09.502986 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:10.503685 kubelet[1421]: E0317 19:54:10.503526 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:11.504650 kubelet[1421]: E0317 19:54:11.504559 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:12.506659 kubelet[1421]: E0317 19:54:12.506514 1421 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:13.507529 kubelet[1421]: E0317 19:54:13.507467 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:14.024016 kubelet[1421]: I0317 19:54:14.023954 1421 topology_manager.go:215] "Topology Admit Handler" podUID="0d34db99-bba6-4e05-a8a0-c5163579f57d" podNamespace="default" podName="nfs-server-provisioner-0" Mar 17 19:54:14.040141 systemd[1]: Created slice kubepods-besteffort-pod0d34db99_bba6_4e05_a8a0_c5163579f57d.slice. Mar 17 19:54:14.083759 kubelet[1421]: I0317 19:54:14.083658 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/0d34db99-bba6-4e05-a8a0-c5163579f57d-data\") pod \"nfs-server-provisioner-0\" (UID: \"0d34db99-bba6-4e05-a8a0-c5163579f57d\") " pod="default/nfs-server-provisioner-0" Mar 17 19:54:14.084157 kubelet[1421]: I0317 19:54:14.084116 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8n2m\" (UniqueName: \"kubernetes.io/projected/0d34db99-bba6-4e05-a8a0-c5163579f57d-kube-api-access-s8n2m\") pod \"nfs-server-provisioner-0\" (UID: \"0d34db99-bba6-4e05-a8a0-c5163579f57d\") " pod="default/nfs-server-provisioner-0" Mar 17 19:54:14.352594 env[1154]: time="2025-03-17T19:54:14.351763004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0d34db99-bba6-4e05-a8a0-c5163579f57d,Namespace:default,Attempt:0,}" Mar 17 19:54:14.433254 systemd-networkd[989]: lxcaa5d546a6dfe: Link UP Mar 17 19:54:14.444450 kernel: eth0: renamed from tmp1586d Mar 17 19:54:14.461320 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 19:54:14.461534 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcaa5d546a6dfe: link becomes ready Mar 17 19:54:14.463626 systemd-networkd[989]: lxcaa5d546a6dfe: Gained carrier Mar 17 19:54:14.509489 kubelet[1421]: E0317 19:54:14.509303 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:14.773607 env[1154]: time="2025-03-17T19:54:14.773175522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 19:54:14.773607 env[1154]: time="2025-03-17T19:54:14.773249400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 19:54:14.773607 env[1154]: time="2025-03-17T19:54:14.773273757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 19:54:14.773979 env[1154]: time="2025-03-17T19:54:14.773628022Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1586dbb4cd7f1445e4efadc4ebb69a88aa41700a5f99aa3234cd6d9e9ae613a2 pid=2602 runtime=io.containerd.runc.v2 Mar 17 19:54:14.817822 systemd[1]: Started cri-containerd-1586dbb4cd7f1445e4efadc4ebb69a88aa41700a5f99aa3234cd6d9e9ae613a2.scope. 
Mar 17 19:54:14.858564 env[1154]: time="2025-03-17T19:54:14.858518792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0d34db99-bba6-4e05-a8a0-c5163579f57d,Namespace:default,Attempt:0,} returns sandbox id \"1586dbb4cd7f1445e4efadc4ebb69a88aa41700a5f99aa3234cd6d9e9ae613a2\"" Mar 17 19:54:14.860642 env[1154]: time="2025-03-17T19:54:14.860614609Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 17 19:54:15.208251 systemd[1]: run-containerd-runc-k8s.io-1586dbb4cd7f1445e4efadc4ebb69a88aa41700a5f99aa3234cd6d9e9ae613a2-runc.LTlvuw.mount: Deactivated successfully. Mar 17 19:54:15.509679 kubelet[1421]: E0317 19:54:15.509584 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:16.258851 systemd-networkd[989]: lxcaa5d546a6dfe: Gained IPv6LL Mar 17 19:54:16.510505 kubelet[1421]: E0317 19:54:16.510270 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:17.510567 kubelet[1421]: E0317 19:54:17.510442 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:18.510807 kubelet[1421]: E0317 19:54:18.510765 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:18.674646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3728153077.mount: Deactivated successfully. Mar 17 19:54:19.510974 kubelet[1421]: E0317 19:54:19.510904 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:20.511937 kubelet[1421]: E0317 19:54:20.511868 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:21.512457 kubelet[1421]: E0317 19:54:21.512340 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:21.742340 env[1154]: time="2025-03-17T19:54:21.742270180Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:54:21.748890 env[1154]: time="2025-03-17T19:54:21.748829215Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:54:21.752409 env[1154]: time="2025-03-17T19:54:21.752325720Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:54:21.758487 env[1154]: time="2025-03-17T19:54:21.758434359Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:54:21.762248 env[1154]: time="2025-03-17T19:54:21.760909576Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Mar 17 19:54:21.767218 env[1154]: 
time="2025-03-17T19:54:21.766262196Z" level=info msg="CreateContainer within sandbox \"1586dbb4cd7f1445e4efadc4ebb69a88aa41700a5f99aa3234cd6d9e9ae613a2\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 17 19:54:21.778965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount204547017.mount: Deactivated successfully. Mar 17 19:54:21.792755 env[1154]: time="2025-03-17T19:54:21.792640892Z" level=info msg="CreateContainer within sandbox \"1586dbb4cd7f1445e4efadc4ebb69a88aa41700a5f99aa3234cd6d9e9ae613a2\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"82f15286f24417cc0d4e0a167e31af3938545ae4cb160ee305144a78de62a8ab\"" Mar 17 19:54:21.793614 env[1154]: time="2025-03-17T19:54:21.793560007Z" level=info msg="StartContainer for \"82f15286f24417cc0d4e0a167e31af3938545ae4cb160ee305144a78de62a8ab\"" Mar 17 19:54:21.831726 systemd[1]: Started cri-containerd-82f15286f24417cc0d4e0a167e31af3938545ae4cb160ee305144a78de62a8ab.scope. Mar 17 19:54:21.869029 env[1154]: time="2025-03-17T19:54:21.868978917Z" level=info msg="StartContainer for \"82f15286f24417cc0d4e0a167e31af3938545ae4cb160ee305144a78de62a8ab\" returns successfully" Mar 17 19:54:22.025414 kubelet[1421]: I0317 19:54:22.025294 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.121164411 podStartE2EDuration="8.025275983s" podCreationTimestamp="2025-03-17 19:54:14 +0000 UTC" firstStartedPulling="2025-03-17 19:54:14.860008722 +0000 UTC m=+51.257671682" lastFinishedPulling="2025-03-17 19:54:21.764120244 +0000 UTC m=+58.161783254" observedRunningTime="2025-03-17 19:54:22.024778559 +0000 UTC m=+58.422441559" watchObservedRunningTime="2025-03-17 19:54:22.025275983 +0000 UTC m=+58.422938943" Mar 17 19:54:22.513417 kubelet[1421]: E0317 19:54:22.513302 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:22.778885 systemd[1]: run-containerd-runc-k8s.io-82f15286f24417cc0d4e0a167e31af3938545ae4cb160ee305144a78de62a8ab-runc.YBRzOz.mount: Deactivated successfully. 
Mar 17 19:54:23.514629 kubelet[1421]: E0317 19:54:23.514573 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:24.466625 kubelet[1421]: E0317 19:54:24.466558 1421 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:24.515963 kubelet[1421]: E0317 19:54:24.515881 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:25.516476 kubelet[1421]: E0317 19:54:25.516342 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:26.517474 kubelet[1421]: E0317 19:54:26.517339 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:27.519520 kubelet[1421]: E0317 19:54:27.519312 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:28.520166 kubelet[1421]: E0317 19:54:28.520087 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:29.520967 kubelet[1421]: E0317 19:54:29.520898 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:30.521803 kubelet[1421]: E0317 19:54:30.521684 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:31.522036 kubelet[1421]: E0317 19:54:31.521968 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:31.763101 kubelet[1421]: I0317 19:54:31.762992 1421 topology_manager.go:215] "Topology Admit Handler" podUID="38bf3e76-8908-4995-8f0d-d5c2c302e84c" podNamespace="default" podName="test-pod-1" Mar 17 19:54:31.775536 systemd[1]: Created slice kubepods-besteffort-pod38bf3e76_8908_4995_8f0d_d5c2c302e84c.slice. Mar 17 19:54:31.818479 kubelet[1421]: I0317 19:54:31.818347 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3ec23ebf-f4af-4e67-abb1-35a6ef77d1a7\" (UniqueName: \"kubernetes.io/nfs/38bf3e76-8908-4995-8f0d-d5c2c302e84c-pvc-3ec23ebf-f4af-4e67-abb1-35a6ef77d1a7\") pod \"test-pod-1\" (UID: \"38bf3e76-8908-4995-8f0d-d5c2c302e84c\") " pod="default/test-pod-1" Mar 17 19:54:31.818479 kubelet[1421]: I0317 19:54:31.818475 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd9qb\" (UniqueName: \"kubernetes.io/projected/38bf3e76-8908-4995-8f0d-d5c2c302e84c-kube-api-access-gd9qb\") pod \"test-pod-1\" (UID: \"38bf3e76-8908-4995-8f0d-d5c2c302e84c\") " pod="default/test-pod-1" Mar 17 19:54:31.998438 kernel: FS-Cache: Loaded Mar 17 19:54:32.080681 kernel: RPC: Registered named UNIX socket transport module. Mar 17 19:54:32.080800 kernel: RPC: Registered udp transport module. Mar 17 19:54:32.080833 kernel: RPC: Registered tcp transport module. Mar 17 19:54:32.080856 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Mar 17 19:54:32.161398 kernel: FS-Cache: Netfs 'nfs' registered for caching Mar 17 19:54:32.437806 kernel: NFS: Registering the id_resolver key type Mar 17 19:54:32.438019 kernel: Key type id_resolver registered Mar 17 19:54:32.438076 kernel: Key type id_legacy registered Mar 17 19:54:32.523235 kubelet[1421]: E0317 19:54:32.523105 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:32.524780 nfsidmap[2723]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Mar 17 19:54:32.535891 nfsidmap[2724]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Mar 17 19:54:32.685876 env[1154]: time="2025-03-17T19:54:32.685794675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:38bf3e76-8908-4995-8f0d-d5c2c302e84c,Namespace:default,Attempt:0,}" Mar 17 19:54:32.768287 systemd-networkd[989]: lxcebc24709e914: Link UP Mar 17 19:54:32.786475 kernel: eth0: renamed from tmpbb68c Mar 17 19:54:32.798098 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 19:54:32.798202 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcebc24709e914: link becomes ready Mar 17 19:54:32.798332 systemd-networkd[989]: lxcebc24709e914: Gained carrier Mar 17 19:54:33.012589 env[1154]: time="2025-03-17T19:54:33.012515380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 19:54:33.012779 env[1154]: time="2025-03-17T19:54:33.012753949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 19:54:33.012891 env[1154]: time="2025-03-17T19:54:33.012863361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 19:54:33.013276 env[1154]: time="2025-03-17T19:54:33.013188360Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb68c9e6130f4f74e72118cad2d921362ca8edc5743ec409b30507a05d94bb24 pid=2751 runtime=io.containerd.runc.v2 Mar 17 19:54:33.039175 systemd[1]: Started cri-containerd-bb68c9e6130f4f74e72118cad2d921362ca8edc5743ec409b30507a05d94bb24.scope. Mar 17 19:54:33.041024 systemd[1]: run-containerd-runc-k8s.io-bb68c9e6130f4f74e72118cad2d921362ca8edc5743ec409b30507a05d94bb24-runc.mvjoBp.mount: Deactivated successfully. 
Mar 17 19:54:33.094728 env[1154]: time="2025-03-17T19:54:33.094669023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:38bf3e76-8908-4995-8f0d-d5c2c302e84c,Namespace:default,Attempt:0,} returns sandbox id \"bb68c9e6130f4f74e72118cad2d921362ca8edc5743ec409b30507a05d94bb24\"" Mar 17 19:54:33.096852 env[1154]: time="2025-03-17T19:54:33.096816159Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 19:54:33.524079 kubelet[1421]: E0317 19:54:33.523965 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:33.637115 env[1154]: time="2025-03-17T19:54:33.636957321Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:54:33.641343 env[1154]: time="2025-03-17T19:54:33.641266903Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:54:33.645813 env[1154]: time="2025-03-17T19:54:33.645751868Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:54:33.650041 env[1154]: time="2025-03-17T19:54:33.649971323Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:54:33.652663 env[1154]: time="2025-03-17T19:54:33.652555615Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\"" Mar 17 19:54:33.659447 env[1154]: time="2025-03-17T19:54:33.659344806Z" level=info msg="CreateContainer within sandbox \"bb68c9e6130f4f74e72118cad2d921362ca8edc5743ec409b30507a05d94bb24\" for container &ContainerMetadata{Name:test,Attempt:0,}" Mar 17 19:54:33.688316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3983192065.mount: Deactivated successfully. Mar 17 19:54:33.695266 env[1154]: time="2025-03-17T19:54:33.695187166Z" level=info msg="CreateContainer within sandbox \"bb68c9e6130f4f74e72118cad2d921362ca8edc5743ec409b30507a05d94bb24\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"8164d937f4709475a22467024831f382300740446b4cf0744be583f07e32e89c\"" Mar 17 19:54:33.697096 env[1154]: time="2025-03-17T19:54:33.696921693Z" level=info msg="StartContainer for \"8164d937f4709475a22467024831f382300740446b4cf0744be583f07e32e89c\"" Mar 17 19:54:33.731621 systemd[1]: Started cri-containerd-8164d937f4709475a22467024831f382300740446b4cf0744be583f07e32e89c.scope. 
Mar 17 19:54:33.790893 env[1154]: time="2025-03-17T19:54:33.790776309Z" level=info msg="StartContainer for \"8164d937f4709475a22467024831f382300740446b4cf0744be583f07e32e89c\" returns successfully" Mar 17 19:54:33.858718 systemd-networkd[989]: lxcebc24709e914: Gained IPv6LL Mar 17 19:54:34.072597 kubelet[1421]: I0317 19:54:34.072399 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.512494813 podStartE2EDuration="18.072333674s" podCreationTimestamp="2025-03-17 19:54:16 +0000 UTC" firstStartedPulling="2025-03-17 19:54:33.096141095 +0000 UTC m=+69.493804055" lastFinishedPulling="2025-03-17 19:54:33.655979906 +0000 UTC m=+70.053642916" observedRunningTime="2025-03-17 19:54:34.071728799 +0000 UTC m=+70.469391809" watchObservedRunningTime="2025-03-17 19:54:34.072333674 +0000 UTC m=+70.469996684" Mar 17 19:54:34.524350 kubelet[1421]: E0317 19:54:34.524283 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:35.524576 kubelet[1421]: E0317 19:54:35.524517 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:36.525673 kubelet[1421]: E0317 19:54:36.525624 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:37.527408 kubelet[1421]: E0317 19:54:37.527320 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:38.529034 kubelet[1421]: E0317 19:54:38.528918 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:39.529462 kubelet[1421]: E0317 19:54:39.529337 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:40.530512 kubelet[1421]: E0317 19:54:40.530449 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:41.532311 kubelet[1421]: E0317 19:54:41.532184 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:42.532645 kubelet[1421]: E0317 19:54:42.532584 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:43.534155 kubelet[1421]: E0317 19:54:43.534105 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:44.458548 env[1154]: time="2025-03-17T19:54:44.458438292Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 19:54:44.466563 kubelet[1421]: E0317 19:54:44.466460 1421 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:44.469709 env[1154]: time="2025-03-17T19:54:44.469625832Z" level=info msg="StopContainer for \"908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e\" with timeout 2 (s)" Mar 17 19:54:44.470437 env[1154]: time="2025-03-17T19:54:44.470293808Z" level=info msg="Stop container \"908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e\" 
with signal terminated" Mar 17 19:54:44.486949 systemd-networkd[989]: lxc_health: Link DOWN Mar 17 19:54:44.486971 systemd-networkd[989]: lxc_health: Lost carrier Mar 17 19:54:44.535517 kubelet[1421]: E0317 19:54:44.535457 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:44.540120 systemd[1]: cri-containerd-908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e.scope: Deactivated successfully. Mar 17 19:54:44.540646 systemd[1]: cri-containerd-908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e.scope: Consumed 8.617s CPU time. Mar 17 19:54:44.569070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e-rootfs.mount: Deactivated successfully. Mar 17 19:54:44.602441 kubelet[1421]: E0317 19:54:44.602337 1421 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 19:54:45.104266 env[1154]: time="2025-03-17T19:54:45.104167589Z" level=info msg="shim disconnected" id=908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e Mar 17 19:54:45.104266 env[1154]: time="2025-03-17T19:54:45.104255822Z" level=warning msg="cleaning up after shim disconnected" id=908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e namespace=k8s.io Mar 17 19:54:45.104692 env[1154]: time="2025-03-17T19:54:45.104278835Z" level=info msg="cleaning up dead shim" Mar 17 19:54:45.121054 env[1154]: time="2025-03-17T19:54:45.120977626Z" level=warning msg="cleanup warnings time=\"2025-03-17T19:54:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2880 runtime=io.containerd.runc.v2\n" Mar 17 19:54:45.126384 env[1154]: time="2025-03-17T19:54:45.126292186Z" level=info msg="StopContainer for \"908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e\" returns successfully" Mar 17 19:54:45.127704 env[1154]: time="2025-03-17T19:54:45.127654179Z" level=info msg="StopPodSandbox for \"4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529\"" Mar 17 19:54:45.128076 env[1154]: time="2025-03-17T19:54:45.128023984Z" level=info msg="Container to stop \"908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 19:54:45.128258 env[1154]: time="2025-03-17T19:54:45.128214566Z" level=info msg="Container to stop \"6889ced94f56209cbdf79084302cfa75edf5fd70eba179e1d1953f283ca313da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 19:54:45.128499 env[1154]: time="2025-03-17T19:54:45.128451095Z" level=info msg="Container to stop \"587229cdc0fbb0f41f9747d7ee032bc40e8db913a3a9e3f22860f098080eaf2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 19:54:45.128698 env[1154]: time="2025-03-17T19:54:45.128654291Z" level=info msg="Container to stop \"1d8e4146bd593c8c8f96faa736e8fd0a2b1192b61df11d7141127dbb4a6e513c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 19:54:45.128855 env[1154]: time="2025-03-17T19:54:45.128813626Z" level=info msg="Container to stop \"26f359f96f2a750e22345a45f08b83a7f45068e47f101ade9ec18ec9f3c1642d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 19:54:45.133272 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529-shm.mount: Deactivated successfully. Mar 17 19:54:45.146922 systemd[1]: cri-containerd-4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529.scope: Deactivated successfully. Mar 17 19:54:45.188258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529-rootfs.mount: Deactivated successfully. Mar 17 19:54:45.195739 env[1154]: time="2025-03-17T19:54:45.195661266Z" level=info msg="shim disconnected" id=4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529 Mar 17 19:54:45.196100 env[1154]: time="2025-03-17T19:54:45.196024438Z" level=warning msg="cleaning up after shim disconnected" id=4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529 namespace=k8s.io Mar 17 19:54:45.196287 env[1154]: time="2025-03-17T19:54:45.196249495Z" level=info msg="cleaning up dead shim" Mar 17 19:54:45.212012 env[1154]: time="2025-03-17T19:54:45.211934479Z" level=warning msg="cleanup warnings time=\"2025-03-17T19:54:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2910 runtime=io.containerd.runc.v2\n" Mar 17 19:54:45.212641 env[1154]: time="2025-03-17T19:54:45.212588720Z" level=info msg="TearDown network for sandbox \"4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529\" successfully" Mar 17 19:54:45.212793 env[1154]: time="2025-03-17T19:54:45.212643803Z" level=info msg="StopPodSandbox for \"4614e6b34c394d395b108c2efc98e1efcd90423546e6ab9eda07921a97e97529\" returns successfully" Mar 17 19:54:45.334679 kubelet[1421]: I0317 19:54:45.334573 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgcgn\" (UniqueName: \"kubernetes.io/projected/21544e81-eda6-424c-969b-1c7e79cee499-kube-api-access-cgcgn\") pod \"21544e81-eda6-424c-969b-1c7e79cee499\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " Mar 17 19:54:45.334679 kubelet[1421]: I0317 19:54:45.334648 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-lib-modules\") pod \"21544e81-eda6-424c-969b-1c7e79cee499\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " Mar 17 19:54:45.335024 kubelet[1421]: I0317 19:54:45.334700 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/21544e81-eda6-424c-969b-1c7e79cee499-clustermesh-secrets\") pod \"21544e81-eda6-424c-969b-1c7e79cee499\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " Mar 17 19:54:45.335024 kubelet[1421]: I0317 19:54:45.334747 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21544e81-eda6-424c-969b-1c7e79cee499-cilium-config-path\") pod \"21544e81-eda6-424c-969b-1c7e79cee499\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " Mar 17 19:54:45.335024 kubelet[1421]: I0317 19:54:45.334789 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-host-proc-sys-kernel\") pod \"21544e81-eda6-424c-969b-1c7e79cee499\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " Mar 17 19:54:45.335024 kubelet[1421]: I0317 19:54:45.334832 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for 
volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/21544e81-eda6-424c-969b-1c7e79cee499-hubble-tls\") pod \"21544e81-eda6-424c-969b-1c7e79cee499\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " Mar 17 19:54:45.335024 kubelet[1421]: I0317 19:54:45.334875 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-hostproc\") pod \"21544e81-eda6-424c-969b-1c7e79cee499\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " Mar 17 19:54:45.335024 kubelet[1421]: I0317 19:54:45.334912 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-etc-cni-netd\") pod \"21544e81-eda6-424c-969b-1c7e79cee499\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " Mar 17 19:54:45.335478 kubelet[1421]: I0317 19:54:45.334950 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-bpf-maps\") pod \"21544e81-eda6-424c-969b-1c7e79cee499\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " Mar 17 19:54:45.335478 kubelet[1421]: I0317 19:54:45.334988 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-cni-path\") pod \"21544e81-eda6-424c-969b-1c7e79cee499\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " Mar 17 19:54:45.335478 kubelet[1421]: I0317 19:54:45.335058 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-host-proc-sys-net\") pod \"21544e81-eda6-424c-969b-1c7e79cee499\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " Mar 17 19:54:45.335478 kubelet[1421]: I0317 19:54:45.335098 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-xtables-lock\") pod \"21544e81-eda6-424c-969b-1c7e79cee499\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " Mar 17 19:54:45.335478 kubelet[1421]: I0317 19:54:45.335152 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-cilium-cgroup\") pod \"21544e81-eda6-424c-969b-1c7e79cee499\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " Mar 17 19:54:45.335478 kubelet[1421]: I0317 19:54:45.335195 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-cilium-run\") pod \"21544e81-eda6-424c-969b-1c7e79cee499\" (UID: \"21544e81-eda6-424c-969b-1c7e79cee499\") " Mar 17 19:54:45.335904 kubelet[1421]: I0317 19:54:45.335304 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "21544e81-eda6-424c-969b-1c7e79cee499" (UID: "21544e81-eda6-424c-969b-1c7e79cee499"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:54:45.336815 kubelet[1421]: I0317 19:54:45.336063 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-hostproc" (OuterVolumeSpecName: "hostproc") pod "21544e81-eda6-424c-969b-1c7e79cee499" (UID: "21544e81-eda6-424c-969b-1c7e79cee499"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:54:45.336815 kubelet[1421]: I0317 19:54:45.336132 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "21544e81-eda6-424c-969b-1c7e79cee499" (UID: "21544e81-eda6-424c-969b-1c7e79cee499"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:54:45.336815 kubelet[1421]: I0317 19:54:45.336538 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "21544e81-eda6-424c-969b-1c7e79cee499" (UID: "21544e81-eda6-424c-969b-1c7e79cee499"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:54:45.336815 kubelet[1421]: I0317 19:54:45.336592 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "21544e81-eda6-424c-969b-1c7e79cee499" (UID: "21544e81-eda6-424c-969b-1c7e79cee499"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:54:45.336815 kubelet[1421]: I0317 19:54:45.336626 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-cni-path" (OuterVolumeSpecName: "cni-path") pod "21544e81-eda6-424c-969b-1c7e79cee499" (UID: "21544e81-eda6-424c-969b-1c7e79cee499"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:54:45.337196 kubelet[1421]: I0317 19:54:45.336659 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "21544e81-eda6-424c-969b-1c7e79cee499" (UID: "21544e81-eda6-424c-969b-1c7e79cee499"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:54:45.337196 kubelet[1421]: I0317 19:54:45.336695 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "21544e81-eda6-424c-969b-1c7e79cee499" (UID: "21544e81-eda6-424c-969b-1c7e79cee499"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:54:45.337196 kubelet[1421]: I0317 19:54:45.336728 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "21544e81-eda6-424c-969b-1c7e79cee499" (UID: "21544e81-eda6-424c-969b-1c7e79cee499"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:54:45.338047 kubelet[1421]: I0317 19:54:45.337970 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "21544e81-eda6-424c-969b-1c7e79cee499" (UID: "21544e81-eda6-424c-969b-1c7e79cee499"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:54:45.345746 systemd[1]: var-lib-kubelet-pods-21544e81\x2deda6\x2d424c\x2d969b\x2d1c7e79cee499-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 19:54:45.351676 kubelet[1421]: I0317 19:54:45.351615 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21544e81-eda6-424c-969b-1c7e79cee499-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "21544e81-eda6-424c-969b-1c7e79cee499" (UID: "21544e81-eda6-424c-969b-1c7e79cee499"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 19:54:45.352520 kubelet[1421]: I0317 19:54:45.352476 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21544e81-eda6-424c-969b-1c7e79cee499-kube-api-access-cgcgn" (OuterVolumeSpecName: "kube-api-access-cgcgn") pod "21544e81-eda6-424c-969b-1c7e79cee499" (UID: "21544e81-eda6-424c-969b-1c7e79cee499"). InnerVolumeSpecName "kube-api-access-cgcgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 19:54:45.353715 kubelet[1421]: I0317 19:54:45.353670 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21544e81-eda6-424c-969b-1c7e79cee499-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "21544e81-eda6-424c-969b-1c7e79cee499" (UID: "21544e81-eda6-424c-969b-1c7e79cee499"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 19:54:45.363507 kubelet[1421]: I0317 19:54:45.357820 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21544e81-eda6-424c-969b-1c7e79cee499-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "21544e81-eda6-424c-969b-1c7e79cee499" (UID: "21544e81-eda6-424c-969b-1c7e79cee499"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 19:54:45.430193 systemd[1]: var-lib-kubelet-pods-21544e81\x2deda6\x2d424c\x2d969b\x2d1c7e79cee499-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcgcgn.mount: Deactivated successfully. Mar 17 19:54:45.430436 systemd[1]: var-lib-kubelet-pods-21544e81\x2deda6\x2d424c\x2d969b\x2d1c7e79cee499-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 17 19:54:45.436411 kubelet[1421]: I0317 19:54:45.436232 1421 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/21544e81-eda6-424c-969b-1c7e79cee499-clustermesh-secrets\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:45.436411 kubelet[1421]: I0317 19:54:45.436336 1421 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cgcgn\" (UniqueName: \"kubernetes.io/projected/21544e81-eda6-424c-969b-1c7e79cee499-kube-api-access-cgcgn\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:45.436411 kubelet[1421]: I0317 19:54:45.436392 1421 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-lib-modules\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:45.436411 kubelet[1421]: I0317 19:54:45.436417 1421 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-host-proc-sys-kernel\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:45.436766 kubelet[1421]: I0317 19:54:45.436439 1421 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/21544e81-eda6-424c-969b-1c7e79cee499-hubble-tls\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:45.436766 kubelet[1421]: I0317 19:54:45.436460 1421 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21544e81-eda6-424c-969b-1c7e79cee499-cilium-config-path\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:45.436766 kubelet[1421]: I0317 19:54:45.436482 1421 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-bpf-maps\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:45.436766 kubelet[1421]: I0317 19:54:45.436503 1421 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-cni-path\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:45.436766 kubelet[1421]: I0317 19:54:45.436527 1421 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-host-proc-sys-net\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:45.436766 kubelet[1421]: I0317 19:54:45.436546 1421 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-hostproc\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:45.436766 kubelet[1421]: I0317 19:54:45.436566 1421 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-etc-cni-netd\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:45.436766 kubelet[1421]: I0317 19:54:45.436586 1421 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-cilium-cgroup\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:45.437254 kubelet[1421]: I0317 19:54:45.436606 1421 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-cilium-run\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 
19:54:45.437254 kubelet[1421]: I0317 19:54:45.436627 1421 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21544e81-eda6-424c-969b-1c7e79cee499-xtables-lock\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:45.536142 kubelet[1421]: E0317 19:54:45.536020 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:45.613056 kubelet[1421]: I0317 19:54:45.612987 1421 setters.go:580] "Node became not ready" node="172.24.4.126" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T19:54:45Z","lastTransitionTime":"2025-03-17T19:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 19:54:46.093411 kubelet[1421]: I0317 19:54:46.093333 1421 scope.go:117] "RemoveContainer" containerID="908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e" Mar 17 19:54:46.102248 systemd[1]: Removed slice kubepods-burstable-pod21544e81_eda6_424c_969b_1c7e79cee499.slice. Mar 17 19:54:46.102528 systemd[1]: kubepods-burstable-pod21544e81_eda6_424c_969b_1c7e79cee499.slice: Consumed 8.754s CPU time. Mar 17 19:54:46.105663 env[1154]: time="2025-03-17T19:54:46.105339833Z" level=info msg="RemoveContainer for \"908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e\"" Mar 17 19:54:46.111086 env[1154]: time="2025-03-17T19:54:46.111008012Z" level=info msg="RemoveContainer for \"908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e\" returns successfully" Mar 17 19:54:46.111517 kubelet[1421]: I0317 19:54:46.111459 1421 scope.go:117] "RemoveContainer" containerID="587229cdc0fbb0f41f9747d7ee032bc40e8db913a3a9e3f22860f098080eaf2c" Mar 17 19:54:46.113551 env[1154]: time="2025-03-17T19:54:46.113479168Z" level=info msg="RemoveContainer for \"587229cdc0fbb0f41f9747d7ee032bc40e8db913a3a9e3f22860f098080eaf2c\"" Mar 17 19:54:46.118829 env[1154]: time="2025-03-17T19:54:46.118706141Z" level=info msg="RemoveContainer for \"587229cdc0fbb0f41f9747d7ee032bc40e8db913a3a9e3f22860f098080eaf2c\" returns successfully" Mar 17 19:54:46.119125 kubelet[1421]: I0317 19:54:46.119081 1421 scope.go:117] "RemoveContainer" containerID="26f359f96f2a750e22345a45f08b83a7f45068e47f101ade9ec18ec9f3c1642d" Mar 17 19:54:46.121714 env[1154]: time="2025-03-17T19:54:46.121657678Z" level=info msg="RemoveContainer for \"26f359f96f2a750e22345a45f08b83a7f45068e47f101ade9ec18ec9f3c1642d\"" Mar 17 19:54:46.126905 env[1154]: time="2025-03-17T19:54:46.126830149Z" level=info msg="RemoveContainer for \"26f359f96f2a750e22345a45f08b83a7f45068e47f101ade9ec18ec9f3c1642d\" returns successfully" Mar 17 19:54:46.127260 kubelet[1421]: I0317 19:54:46.127166 1421 scope.go:117] "RemoveContainer" containerID="1d8e4146bd593c8c8f96faa736e8fd0a2b1192b61df11d7141127dbb4a6e513c" Mar 17 19:54:46.129720 env[1154]: time="2025-03-17T19:54:46.129641547Z" level=info msg="RemoveContainer for \"1d8e4146bd593c8c8f96faa736e8fd0a2b1192b61df11d7141127dbb4a6e513c\"" Mar 17 19:54:46.137728 env[1154]: time="2025-03-17T19:54:46.137660229Z" level=info msg="RemoveContainer for \"1d8e4146bd593c8c8f96faa736e8fd0a2b1192b61df11d7141127dbb4a6e513c\" returns successfully" Mar 17 19:54:46.138021 kubelet[1421]: I0317 19:54:46.137984 1421 scope.go:117] "RemoveContainer" containerID="6889ced94f56209cbdf79084302cfa75edf5fd70eba179e1d1953f283ca313da" Mar 17 19:54:46.139987 env[1154]: 
time="2025-03-17T19:54:46.139937487Z" level=info msg="RemoveContainer for \"6889ced94f56209cbdf79084302cfa75edf5fd70eba179e1d1953f283ca313da\"" Mar 17 19:54:46.145054 env[1154]: time="2025-03-17T19:54:46.144942669Z" level=info msg="RemoveContainer for \"6889ced94f56209cbdf79084302cfa75edf5fd70eba179e1d1953f283ca313da\" returns successfully" Mar 17 19:54:46.145249 kubelet[1421]: I0317 19:54:46.145206 1421 scope.go:117] "RemoveContainer" containerID="908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e" Mar 17 19:54:46.145776 env[1154]: time="2025-03-17T19:54:46.145575871Z" level=error msg="ContainerStatus for \"908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e\": not found" Mar 17 19:54:46.146041 kubelet[1421]: E0317 19:54:46.145992 1421 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e\": not found" containerID="908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e" Mar 17 19:54:46.146204 kubelet[1421]: I0317 19:54:46.146052 1421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e"} err="failed to get container status \"908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e\": rpc error: code = NotFound desc = an error occurred when try to find container \"908014fb161c35ecbfb0a3a69d9fcdc5181515adb5956a0d73e64f41d913f16e\": not found" Mar 17 19:54:46.146204 kubelet[1421]: I0317 19:54:46.146202 1421 scope.go:117] "RemoveContainer" containerID="587229cdc0fbb0f41f9747d7ee032bc40e8db913a3a9e3f22860f098080eaf2c" Mar 17 19:54:46.146844 env[1154]: time="2025-03-17T19:54:46.146733435Z" level=error msg="ContainerStatus for \"587229cdc0fbb0f41f9747d7ee032bc40e8db913a3a9e3f22860f098080eaf2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"587229cdc0fbb0f41f9747d7ee032bc40e8db913a3a9e3f22860f098080eaf2c\": not found" Mar 17 19:54:46.147297 kubelet[1421]: E0317 19:54:46.147251 1421 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"587229cdc0fbb0f41f9747d7ee032bc40e8db913a3a9e3f22860f098080eaf2c\": not found" containerID="587229cdc0fbb0f41f9747d7ee032bc40e8db913a3a9e3f22860f098080eaf2c" Mar 17 19:54:46.147507 kubelet[1421]: I0317 19:54:46.147307 1421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"587229cdc0fbb0f41f9747d7ee032bc40e8db913a3a9e3f22860f098080eaf2c"} err="failed to get container status \"587229cdc0fbb0f41f9747d7ee032bc40e8db913a3a9e3f22860f098080eaf2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"587229cdc0fbb0f41f9747d7ee032bc40e8db913a3a9e3f22860f098080eaf2c\": not found" Mar 17 19:54:46.147507 kubelet[1421]: I0317 19:54:46.147343 1421 scope.go:117] "RemoveContainer" containerID="26f359f96f2a750e22345a45f08b83a7f45068e47f101ade9ec18ec9f3c1642d" Mar 17 19:54:46.147853 env[1154]: time="2025-03-17T19:54:46.147723179Z" level=error msg="ContainerStatus for \"26f359f96f2a750e22345a45f08b83a7f45068e47f101ade9ec18ec9f3c1642d\" failed" error="rpc error: code = NotFound 
desc = an error occurred when try to find container \"26f359f96f2a750e22345a45f08b83a7f45068e47f101ade9ec18ec9f3c1642d\": not found" Mar 17 19:54:46.148075 kubelet[1421]: E0317 19:54:46.148025 1421 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"26f359f96f2a750e22345a45f08b83a7f45068e47f101ade9ec18ec9f3c1642d\": not found" containerID="26f359f96f2a750e22345a45f08b83a7f45068e47f101ade9ec18ec9f3c1642d" Mar 17 19:54:46.148225 kubelet[1421]: I0317 19:54:46.148078 1421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"26f359f96f2a750e22345a45f08b83a7f45068e47f101ade9ec18ec9f3c1642d"} err="failed to get container status \"26f359f96f2a750e22345a45f08b83a7f45068e47f101ade9ec18ec9f3c1642d\": rpc error: code = NotFound desc = an error occurred when try to find container \"26f359f96f2a750e22345a45f08b83a7f45068e47f101ade9ec18ec9f3c1642d\": not found" Mar 17 19:54:46.148225 kubelet[1421]: I0317 19:54:46.148113 1421 scope.go:117] "RemoveContainer" containerID="1d8e4146bd593c8c8f96faa736e8fd0a2b1192b61df11d7141127dbb4a6e513c" Mar 17 19:54:46.148727 env[1154]: time="2025-03-17T19:54:46.148625650Z" level=error msg="ContainerStatus for \"1d8e4146bd593c8c8f96faa736e8fd0a2b1192b61df11d7141127dbb4a6e513c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d8e4146bd593c8c8f96faa736e8fd0a2b1192b61df11d7141127dbb4a6e513c\": not found" Mar 17 19:54:46.149188 kubelet[1421]: E0317 19:54:46.149132 1421 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d8e4146bd593c8c8f96faa736e8fd0a2b1192b61df11d7141127dbb4a6e513c\": not found" containerID="1d8e4146bd593c8c8f96faa736e8fd0a2b1192b61df11d7141127dbb4a6e513c" Mar 17 19:54:46.149312 kubelet[1421]: I0317 19:54:46.149192 1421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d8e4146bd593c8c8f96faa736e8fd0a2b1192b61df11d7141127dbb4a6e513c"} err="failed to get container status \"1d8e4146bd593c8c8f96faa736e8fd0a2b1192b61df11d7141127dbb4a6e513c\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d8e4146bd593c8c8f96faa736e8fd0a2b1192b61df11d7141127dbb4a6e513c\": not found" Mar 17 19:54:46.149312 kubelet[1421]: I0317 19:54:46.149226 1421 scope.go:117] "RemoveContainer" containerID="6889ced94f56209cbdf79084302cfa75edf5fd70eba179e1d1953f283ca313da" Mar 17 19:54:46.149709 env[1154]: time="2025-03-17T19:54:46.149575890Z" level=error msg="ContainerStatus for \"6889ced94f56209cbdf79084302cfa75edf5fd70eba179e1d1953f283ca313da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6889ced94f56209cbdf79084302cfa75edf5fd70eba179e1d1953f283ca313da\": not found" Mar 17 19:54:46.150097 kubelet[1421]: E0317 19:54:46.150020 1421 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6889ced94f56209cbdf79084302cfa75edf5fd70eba179e1d1953f283ca313da\": not found" containerID="6889ced94f56209cbdf79084302cfa75edf5fd70eba179e1d1953f283ca313da" Mar 17 19:54:46.150507 kubelet[1421]: I0317 19:54:46.150422 1421 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"6889ced94f56209cbdf79084302cfa75edf5fd70eba179e1d1953f283ca313da"} err="failed to get container status \"6889ced94f56209cbdf79084302cfa75edf5fd70eba179e1d1953f283ca313da\": rpc error: code = NotFound desc = an error occurred when try to find container \"6889ced94f56209cbdf79084302cfa75edf5fd70eba179e1d1953f283ca313da\": not found" Mar 17 19:54:46.537199 kubelet[1421]: E0317 19:54:46.537123 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:46.667157 kubelet[1421]: I0317 19:54:46.667097 1421 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21544e81-eda6-424c-969b-1c7e79cee499" path="/var/lib/kubelet/pods/21544e81-eda6-424c-969b-1c7e79cee499/volumes" Mar 17 19:54:47.538328 kubelet[1421]: E0317 19:54:47.538262 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:48.539889 kubelet[1421]: E0317 19:54:48.539826 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:49.024778 kubelet[1421]: I0317 19:54:49.024716 1421 topology_manager.go:215] "Topology Admit Handler" podUID="e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" podNamespace="kube-system" podName="cilium-89jsq" Mar 17 19:54:49.025038 kubelet[1421]: E0317 19:54:49.024819 1421 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="21544e81-eda6-424c-969b-1c7e79cee499" containerName="mount-bpf-fs" Mar 17 19:54:49.025038 kubelet[1421]: E0317 19:54:49.024842 1421 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="21544e81-eda6-424c-969b-1c7e79cee499" containerName="cilium-agent" Mar 17 19:54:49.025038 kubelet[1421]: E0317 19:54:49.024859 1421 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="21544e81-eda6-424c-969b-1c7e79cee499" containerName="mount-cgroup" Mar 17 19:54:49.025038 kubelet[1421]: E0317 19:54:49.024873 1421 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="21544e81-eda6-424c-969b-1c7e79cee499" containerName="apply-sysctl-overwrites" Mar 17 19:54:49.025038 kubelet[1421]: E0317 19:54:49.024887 1421 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="21544e81-eda6-424c-969b-1c7e79cee499" containerName="clean-cilium-state" Mar 17 19:54:49.025038 kubelet[1421]: I0317 19:54:49.024926 1421 memory_manager.go:354] "RemoveStaleState removing state" podUID="21544e81-eda6-424c-969b-1c7e79cee499" containerName="cilium-agent" Mar 17 19:54:49.025971 kubelet[1421]: I0317 19:54:49.025897 1421 topology_manager.go:215] "Topology Admit Handler" podUID="1d1834f8-0aab-4d4b-abcc-8982ad0c1b0d" podNamespace="kube-system" podName="cilium-operator-599987898-n5gcw" Mar 17 19:54:49.038763 systemd[1]: Created slice kubepods-burstable-pode6ad1eb3_bc22_4ce9_8f97_065d4f8eb584.slice. Mar 17 19:54:49.055319 systemd[1]: Created slice kubepods-besteffort-pod1d1834f8_0aab_4d4b_abcc_8982ad0c1b0d.slice. 
Mar 17 19:54:49.161311 kubelet[1421]: I0317 19:54:49.161072 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rflgb\" (UniqueName: \"kubernetes.io/projected/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-kube-api-access-rflgb\") pod \"cilium-89jsq\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " pod="kube-system/cilium-89jsq" Mar 17 19:54:49.161311 kubelet[1421]: I0317 19:54:49.161187 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-cni-path\") pod \"cilium-89jsq\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " pod="kube-system/cilium-89jsq" Mar 17 19:54:49.161311 kubelet[1421]: I0317 19:54:49.161277 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-lib-modules\") pod \"cilium-89jsq\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " pod="kube-system/cilium-89jsq" Mar 17 19:54:49.161732 kubelet[1421]: I0317 19:54:49.161397 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-xtables-lock\") pod \"cilium-89jsq\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " pod="kube-system/cilium-89jsq" Mar 17 19:54:49.161732 kubelet[1421]: I0317 19:54:49.161506 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-clustermesh-secrets\") pod \"cilium-89jsq\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " pod="kube-system/cilium-89jsq" Mar 17 19:54:49.161732 kubelet[1421]: I0317 19:54:49.161620 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-bpf-maps\") pod \"cilium-89jsq\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " pod="kube-system/cilium-89jsq" Mar 17 19:54:49.161732 kubelet[1421]: I0317 19:54:49.161703 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-hostproc\") pod \"cilium-89jsq\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " pod="kube-system/cilium-89jsq" Mar 17 19:54:49.161996 kubelet[1421]: I0317 19:54:49.161890 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-cilium-cgroup\") pod \"cilium-89jsq\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " pod="kube-system/cilium-89jsq" Mar 17 19:54:49.162167 kubelet[1421]: I0317 19:54:49.161943 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-etc-cni-netd\") pod \"cilium-89jsq\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " pod="kube-system/cilium-89jsq" Mar 17 19:54:49.162266 kubelet[1421]: I0317 19:54:49.162203 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-cilium-run\") pod \"cilium-89jsq\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " pod="kube-system/cilium-89jsq" Mar 17 19:54:49.162342 kubelet[1421]: I0317 19:54:49.162248 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-hubble-tls\") pod \"cilium-89jsq\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " pod="kube-system/cilium-89jsq" Mar 17 19:54:49.162500 kubelet[1421]: I0317 19:54:49.162336 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb6d9\" (UniqueName: \"kubernetes.io/projected/1d1834f8-0aab-4d4b-abcc-8982ad0c1b0d-kube-api-access-mb6d9\") pod \"cilium-operator-599987898-n5gcw\" (UID: \"1d1834f8-0aab-4d4b-abcc-8982ad0c1b0d\") " pod="kube-system/cilium-operator-599987898-n5gcw" Mar 17 19:54:49.162500 kubelet[1421]: I0317 19:54:49.162446 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-host-proc-sys-kernel\") pod \"cilium-89jsq\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " pod="kube-system/cilium-89jsq" Mar 17 19:54:49.162645 kubelet[1421]: I0317 19:54:49.162528 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-cilium-ipsec-secrets\") pod \"cilium-89jsq\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " pod="kube-system/cilium-89jsq" Mar 17 19:54:49.162645 kubelet[1421]: I0317 19:54:49.162632 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-host-proc-sys-net\") pod \"cilium-89jsq\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " pod="kube-system/cilium-89jsq" Mar 17 19:54:49.162779 kubelet[1421]: I0317 19:54:49.162711 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d1834f8-0aab-4d4b-abcc-8982ad0c1b0d-cilium-config-path\") pod \"cilium-operator-599987898-n5gcw\" (UID: \"1d1834f8-0aab-4d4b-abcc-8982ad0c1b0d\") " pod="kube-system/cilium-operator-599987898-n5gcw" Mar 17 19:54:49.162848 kubelet[1421]: I0317 19:54:49.162792 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-cilium-config-path\") pod \"cilium-89jsq\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " pod="kube-system/cilium-89jsq" Mar 17 19:54:49.350739 env[1154]: time="2025-03-17T19:54:49.350642501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-89jsq,Uid:e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584,Namespace:kube-system,Attempt:0,}" Mar 17 19:54:49.363299 env[1154]: time="2025-03-17T19:54:49.363267788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-n5gcw,Uid:1d1834f8-0aab-4d4b-abcc-8982ad0c1b0d,Namespace:kube-system,Attempt:0,}" Mar 17 19:54:49.366422 env[1154]: time="2025-03-17T19:54:49.366347778Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 19:54:49.366614 env[1154]: time="2025-03-17T19:54:49.366590048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 19:54:49.366723 env[1154]: time="2025-03-17T19:54:49.366701003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 19:54:49.367460 env[1154]: time="2025-03-17T19:54:49.367421900Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a63f5f5da85646f3dce47b11fcf805b5fb5503900f91ff502070670de58c2685 pid=2939 runtime=io.containerd.runc.v2 Mar 17 19:54:49.381488 systemd[1]: Started cri-containerd-a63f5f5da85646f3dce47b11fcf805b5fb5503900f91ff502070670de58c2685.scope. Mar 17 19:54:49.396669 env[1154]: time="2025-03-17T19:54:49.395602816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 19:54:49.396669 env[1154]: time="2025-03-17T19:54:49.395643602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 19:54:49.396669 env[1154]: time="2025-03-17T19:54:49.395657478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 19:54:49.397242 env[1154]: time="2025-03-17T19:54:49.396563126Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4697efbd0204352d033346913dcaa5eb61c0384608502ea05a1b1e5ba57213ab pid=2968 runtime=io.containerd.runc.v2 Mar 17 19:54:49.408381 env[1154]: time="2025-03-17T19:54:49.408266655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-89jsq,Uid:e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584,Namespace:kube-system,Attempt:0,} returns sandbox id \"a63f5f5da85646f3dce47b11fcf805b5fb5503900f91ff502070670de58c2685\"" Mar 17 19:54:49.411277 env[1154]: time="2025-03-17T19:54:49.411241750Z" level=info msg="CreateContainer within sandbox \"a63f5f5da85646f3dce47b11fcf805b5fb5503900f91ff502070670de58c2685\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 19:54:49.415918 systemd[1]: Started cri-containerd-4697efbd0204352d033346913dcaa5eb61c0384608502ea05a1b1e5ba57213ab.scope. Mar 17 19:54:49.431443 env[1154]: time="2025-03-17T19:54:49.431345813Z" level=info msg="CreateContainer within sandbox \"a63f5f5da85646f3dce47b11fcf805b5fb5503900f91ff502070670de58c2685\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6\"" Mar 17 19:54:49.433033 env[1154]: time="2025-03-17T19:54:49.431913484Z" level=info msg="StartContainer for \"090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6\"" Mar 17 19:54:49.448188 systemd[1]: Started cri-containerd-090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6.scope. Mar 17 19:54:49.459331 systemd[1]: cri-containerd-090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6.scope: Deactivated successfully. 
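[Editor's note] The shim start-up messages above reference the containerd runtime v2 task bundle for each sandbox, e.g. path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a63f5f5d..., and the warnings that follow refer to an init.pid file inside that same bundle. The sketch below is illustrative only, assuming the k8s.io containerd namespace and bundle layout shown in these messages: it lists what a bundle directory currently contains and whether the init pid file ever appeared.

```python
"""Peek into a containerd runtime v2 task bundle.

Illustrative sketch based on the bundle paths printed in the journal
entries above (namespace k8s.io); run as root on the node.
"""
import sys
from pathlib import Path

BUNDLE_ROOT = Path("/run/containerd/io.containerd.runtime.v2.task/k8s.io")

def describe_bundle(container_id: str) -> None:
    bundle = BUNDLE_ROOT / container_id
    if not bundle.is_dir():
        print(f"{container_id}: no bundle (task already cleaned up)")
        return
    for entry in sorted(bundle.iterdir()):
        print(f"{container_id}: {entry.name}")
    init_pid = bundle / "init.pid"
    if init_pid.is_file():
        print(f"{container_id}: init pid = {init_pid.read_text().strip()}")
    else:
        # This is the situation behind the "failed to read init pid file"
        # warning later in the log: runc never started the init process.
        print(f"{container_id}: init.pid missing")

if __name__ == "__main__":
    cid = sys.argv[1] if len(sys.argv) > 1 else \
        "a63f5f5da85646f3dce47b11fcf805b5fb5503900f91ff502070670de58c2685"
    describe_bundle(cid)
```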
Mar 17 19:54:49.480199 env[1154]: time="2025-03-17T19:54:49.480153720Z" level=info msg="shim disconnected" id=090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6 Mar 17 19:54:49.480421 env[1154]: time="2025-03-17T19:54:49.480401870Z" level=warning msg="cleaning up after shim disconnected" id=090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6 namespace=k8s.io Mar 17 19:54:49.480492 env[1154]: time="2025-03-17T19:54:49.480477771Z" level=info msg="cleaning up dead shim" Mar 17 19:54:49.483480 env[1154]: time="2025-03-17T19:54:49.483441575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-n5gcw,Uid:1d1834f8-0aab-4d4b-abcc-8982ad0c1b0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4697efbd0204352d033346913dcaa5eb61c0384608502ea05a1b1e5ba57213ab\"" Mar 17 19:54:49.485263 env[1154]: time="2025-03-17T19:54:49.485231655Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 19:54:49.492392 env[1154]: time="2025-03-17T19:54:49.492320587Z" level=warning msg="cleanup warnings time=\"2025-03-17T19:54:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3041 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T19:54:49Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 17 19:54:49.493057 env[1154]: time="2025-03-17T19:54:49.492706563Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Mar 17 19:54:49.493301 env[1154]: time="2025-03-17T19:54:49.493246805Z" level=error msg="Failed to pipe stdout of container \"090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6\"" error="reading from a closed fifo" Mar 17 19:54:49.493444 env[1154]: time="2025-03-17T19:54:49.493406119Z" level=error msg="Failed to pipe stderr of container \"090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6\"" error="reading from a closed fifo" Mar 17 19:54:49.496516 env[1154]: time="2025-03-17T19:54:49.496445906Z" level=error msg="StartContainer for \"090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 19:54:49.497077 kubelet[1421]: E0317 19:54:49.496687 1421 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6" Mar 17 19:54:49.497077 kubelet[1421]: E0317 19:54:49.496845 1421 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 19:54:49.497077 kubelet[1421]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 
19:54:49.497077 kubelet[1421]: rm /hostbin/cilium-mount Mar 17 19:54:49.497253 kubelet[1421]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rflgb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-89jsq_kube-system(e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 19:54:49.497396 kubelet[1421]: E0317 19:54:49.496876 1421 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-89jsq" podUID="e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" Mar 17 19:54:49.540776 kubelet[1421]: E0317 19:54:49.540725 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:49.604250 kubelet[1421]: E0317 19:54:49.604047 1421 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 19:54:50.126994 env[1154]: time="2025-03-17T19:54:50.126899599Z" level=info msg="CreateContainer within sandbox \"a63f5f5da85646f3dce47b11fcf805b5fb5503900f91ff502070670de58c2685\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Mar 17 19:54:50.171291 env[1154]: time="2025-03-17T19:54:50.171175267Z" level=info msg="CreateContainer within sandbox \"a63f5f5da85646f3dce47b11fcf805b5fb5503900f91ff502070670de58c2685\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"2f35869db4d2d9b5ceec05c03e3b645eedac95cf30145e7cf63f84ef6ce3b1c9\"" Mar 17 19:54:50.172880 env[1154]: time="2025-03-17T19:54:50.172619897Z" level=info 
msg="StartContainer for \"2f35869db4d2d9b5ceec05c03e3b645eedac95cf30145e7cf63f84ef6ce3b1c9\"" Mar 17 19:54:50.206439 systemd[1]: Started cri-containerd-2f35869db4d2d9b5ceec05c03e3b645eedac95cf30145e7cf63f84ef6ce3b1c9.scope. Mar 17 19:54:50.225770 systemd[1]: cri-containerd-2f35869db4d2d9b5ceec05c03e3b645eedac95cf30145e7cf63f84ef6ce3b1c9.scope: Deactivated successfully. Mar 17 19:54:50.243443 env[1154]: time="2025-03-17T19:54:50.243394287Z" level=info msg="shim disconnected" id=2f35869db4d2d9b5ceec05c03e3b645eedac95cf30145e7cf63f84ef6ce3b1c9 Mar 17 19:54:50.243695 env[1154]: time="2025-03-17T19:54:50.243638119Z" level=warning msg="cleaning up after shim disconnected" id=2f35869db4d2d9b5ceec05c03e3b645eedac95cf30145e7cf63f84ef6ce3b1c9 namespace=k8s.io Mar 17 19:54:50.243802 env[1154]: time="2025-03-17T19:54:50.243785162Z" level=info msg="cleaning up dead shim" Mar 17 19:54:50.255974 env[1154]: time="2025-03-17T19:54:50.255891263Z" level=warning msg="cleanup warnings time=\"2025-03-17T19:54:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3080 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T19:54:50Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2f35869db4d2d9b5ceec05c03e3b645eedac95cf30145e7cf63f84ef6ce3b1c9/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 17 19:54:50.256526 env[1154]: time="2025-03-17T19:54:50.256345245Z" level=error msg="copy shim log" error="read /proc/self/fd/72: file already closed" Mar 17 19:54:50.258546 env[1154]: time="2025-03-17T19:54:50.258504369Z" level=error msg="Failed to pipe stderr of container \"2f35869db4d2d9b5ceec05c03e3b645eedac95cf30145e7cf63f84ef6ce3b1c9\"" error="reading from a closed fifo" Mar 17 19:54:50.259479 env[1154]: time="2025-03-17T19:54:50.259448970Z" level=error msg="Failed to pipe stdout of container \"2f35869db4d2d9b5ceec05c03e3b645eedac95cf30145e7cf63f84ef6ce3b1c9\"" error="reading from a closed fifo" Mar 17 19:54:50.264214 env[1154]: time="2025-03-17T19:54:50.264088765Z" level=error msg="StartContainer for \"2f35869db4d2d9b5ceec05c03e3b645eedac95cf30145e7cf63f84ef6ce3b1c9\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 19:54:50.265147 kubelet[1421]: E0317 19:54:50.264581 1421 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2f35869db4d2d9b5ceec05c03e3b645eedac95cf30145e7cf63f84ef6ce3b1c9" Mar 17 19:54:50.265147 kubelet[1421]: E0317 19:54:50.265069 1421 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 19:54:50.265147 kubelet[1421]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 19:54:50.265147 kubelet[1421]: rm /hostbin/cilium-mount Mar 17 19:54:50.265320 kubelet[1421]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rflgb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-89jsq_kube-system(e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 19:54:50.265460 kubelet[1421]: E0317 19:54:50.265100 1421 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-89jsq" podUID="e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" Mar 17 19:54:50.541293 kubelet[1421]: E0317 19:54:50.541182 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:51.055245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2610848651.mount: Deactivated successfully. 
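[Editor's note] Both attempts to start the mount-cgroup init container fail inside runc with "write /proc/self/attr/keycreate: invalid argument": the container spec above carries SELinuxOptions with Type:spc_t, and the kernel rejects the keyring label write. The snippet below is a diagnostic sketch only, not the fix; it assumes the standard selinuxfs mount point and simply reports whether SELinux is mounted/enforcing on the node and what the per-process keycreate attribute currently holds.

```python
"""Quick SELinux diagnostics for the keycreate failure above.

Sketch only: reads the standard selinuxfs and procfs locations and
changes nothing on the node.
"""
from pathlib import Path

def read_or_none(path: str):
    try:
        raw = Path(path).read_bytes()
    except OSError:
        return None
    return raw.rstrip(b"\x00\n").decode(errors="replace") or "<empty>"

def main() -> None:
    enforce = read_or_none("/sys/fs/selinux/enforce")
    if enforce is None:
        print("selinuxfs not mounted: no usable SELinux policy is loaded")
    else:
        print(f"SELinux enforce flag: {enforce} (0=permissive, 1=enforcing)")
    # This is the attribute runc failed to write for the container process.
    keycreate = read_or_none("/proc/self/attr/keycreate")
    print(f"/proc/self/attr/keycreate for this shell: {keycreate!r}")

if __name__ == "__main__":
    main()
```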
Mar 17 19:54:51.116885 kubelet[1421]: I0317 19:54:51.116836 1421 scope.go:117] "RemoveContainer" containerID="090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6" Mar 17 19:54:51.117692 kubelet[1421]: I0317 19:54:51.117660 1421 scope.go:117] "RemoveContainer" containerID="090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6" Mar 17 19:54:51.120936 env[1154]: time="2025-03-17T19:54:51.120878858Z" level=info msg="RemoveContainer for \"090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6\"" Mar 17 19:54:51.125033 env[1154]: time="2025-03-17T19:54:51.124976309Z" level=info msg="RemoveContainer for \"090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6\"" Mar 17 19:54:51.125183 env[1154]: time="2025-03-17T19:54:51.125115086Z" level=error msg="RemoveContainer for \"090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6\" failed" error="rpc error: code = NotFound desc = get container info: container \"090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6\" in namespace \"k8s.io\": not found" Mar 17 19:54:51.127183 env[1154]: time="2025-03-17T19:54:51.127130705Z" level=info msg="RemoveContainer for \"090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6\" returns successfully" Mar 17 19:54:51.127584 kubelet[1421]: E0317 19:54:51.127533 1421 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = NotFound desc = get container info: container \"090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6\" in namespace \"k8s.io\": not found" containerID="090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6" Mar 17 19:54:51.127666 kubelet[1421]: I0317 19:54:51.127608 1421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6"} err="rpc error: code = NotFound desc = get container info: container \"090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6\" in namespace \"k8s.io\": not found" Mar 17 19:54:51.128772 kubelet[1421]: E0317 19:54:51.128688 1421 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-89jsq_kube-system(e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584)\"" pod="kube-system/cilium-89jsq" podUID="e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" Mar 17 19:54:51.542168 kubelet[1421]: E0317 19:54:51.542090 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:52.024566 env[1154]: time="2025-03-17T19:54:52.024492165Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:54:52.027079 env[1154]: time="2025-03-17T19:54:52.027026157Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:54:52.030981 env[1154]: time="2025-03-17T19:54:52.030919370Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 17 
19:54:52.031172 env[1154]: time="2025-03-17T19:54:52.029555921Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 19:54:52.035839 env[1154]: time="2025-03-17T19:54:52.035766204Z" level=info msg="CreateContainer within sandbox \"4697efbd0204352d033346913dcaa5eb61c0384608502ea05a1b1e5ba57213ab\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 19:54:52.059011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3392970773.mount: Deactivated successfully. Mar 17 19:54:52.066195 env[1154]: time="2025-03-17T19:54:52.065788841Z" level=info msg="CreateContainer within sandbox \"4697efbd0204352d033346913dcaa5eb61c0384608502ea05a1b1e5ba57213ab\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7b9d13acc3eb83893bee0b29994a28f6f149575d7c857dd1b635c8e321739e1e\"" Mar 17 19:54:52.066907 env[1154]: time="2025-03-17T19:54:52.066874786Z" level=info msg="StartContainer for \"7b9d13acc3eb83893bee0b29994a28f6f149575d7c857dd1b635c8e321739e1e\"" Mar 17 19:54:52.095323 systemd[1]: Started cri-containerd-7b9d13acc3eb83893bee0b29994a28f6f149575d7c857dd1b635c8e321739e1e.scope. Mar 17 19:54:52.121283 env[1154]: time="2025-03-17T19:54:52.121242825Z" level=info msg="StopPodSandbox for \"a63f5f5da85646f3dce47b11fcf805b5fb5503900f91ff502070670de58c2685\"" Mar 17 19:54:52.121661 env[1154]: time="2025-03-17T19:54:52.121304440Z" level=info msg="Container to stop \"2f35869db4d2d9b5ceec05c03e3b645eedac95cf30145e7cf63f84ef6ce3b1c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 19:54:52.133085 systemd[1]: cri-containerd-a63f5f5da85646f3dce47b11fcf805b5fb5503900f91ff502070670de58c2685.scope: Deactivated successfully. 
Mar 17 19:54:52.151489 env[1154]: time="2025-03-17T19:54:52.151436099Z" level=info msg="StartContainer for \"7b9d13acc3eb83893bee0b29994a28f6f149575d7c857dd1b635c8e321739e1e\" returns successfully" Mar 17 19:54:52.414815 env[1154]: time="2025-03-17T19:54:52.414321081Z" level=info msg="shim disconnected" id=a63f5f5da85646f3dce47b11fcf805b5fb5503900f91ff502070670de58c2685 Mar 17 19:54:52.414815 env[1154]: time="2025-03-17T19:54:52.414714161Z" level=warning msg="cleaning up after shim disconnected" id=a63f5f5da85646f3dce47b11fcf805b5fb5503900f91ff502070670de58c2685 namespace=k8s.io Mar 17 19:54:52.414815 env[1154]: time="2025-03-17T19:54:52.414742233Z" level=info msg="cleaning up dead shim" Mar 17 19:54:52.435652 env[1154]: time="2025-03-17T19:54:52.435552266Z" level=warning msg="cleanup warnings time=\"2025-03-17T19:54:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3150 runtime=io.containerd.runc.v2\n" Mar 17 19:54:52.436596 env[1154]: time="2025-03-17T19:54:52.436534910Z" level=info msg="TearDown network for sandbox \"a63f5f5da85646f3dce47b11fcf805b5fb5503900f91ff502070670de58c2685\" successfully" Mar 17 19:54:52.436850 env[1154]: time="2025-03-17T19:54:52.436773833Z" level=info msg="StopPodSandbox for \"a63f5f5da85646f3dce47b11fcf805b5fb5503900f91ff502070670de58c2685\" returns successfully" Mar 17 19:54:52.542262 kubelet[1421]: E0317 19:54:52.542228 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:52.586832 kubelet[1421]: W0317 19:54:52.586735 1421 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6ad1eb3_bc22_4ce9_8f97_065d4f8eb584.slice/cri-containerd-090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6.scope WatchSource:0}: container "090a5ff2618941c18fe48e52f445bc501eada85173fc214d60ace4d8546a1df6" in namespace "k8s.io": not found Mar 17 19:54:52.593408 kubelet[1421]: I0317 19:54:52.593338 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-clustermesh-secrets\") pod \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " Mar 17 19:54:52.594174 kubelet[1421]: I0317 19:54:52.594138 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-host-proc-sys-net\") pod \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " Mar 17 19:54:52.594471 kubelet[1421]: I0317 19:54:52.594432 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rflgb\" (UniqueName: \"kubernetes.io/projected/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-kube-api-access-rflgb\") pod \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " Mar 17 19:54:52.594725 kubelet[1421]: I0317 19:54:52.594694 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-cni-path\") pod \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " Mar 17 19:54:52.594921 kubelet[1421]: I0317 19:54:52.594892 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-lib-modules\") pod \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " Mar 17 19:54:52.595127 kubelet[1421]: I0317 19:54:52.595096 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-cilium-ipsec-secrets\") pod \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " Mar 17 19:54:52.595313 kubelet[1421]: I0317 19:54:52.595283 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-bpf-maps\") pod \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " Mar 17 19:54:52.595552 kubelet[1421]: I0317 19:54:52.595521 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-hostproc\") pod \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " Mar 17 19:54:52.595782 kubelet[1421]: I0317 19:54:52.595750 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-hubble-tls\") pod \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " Mar 17 19:54:52.596019 kubelet[1421]: I0317 19:54:52.595987 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-xtables-lock\") pod \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " Mar 17 19:54:52.596257 kubelet[1421]: I0317 19:54:52.596187 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-cilium-cgroup\") pod \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " Mar 17 19:54:52.596513 kubelet[1421]: I0317 19:54:52.596478 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-cilium-config-path\") pod \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " Mar 17 19:54:52.596720 kubelet[1421]: I0317 19:54:52.596688 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-etc-cni-netd\") pod \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " Mar 17 19:54:52.596907 kubelet[1421]: I0317 19:54:52.596877 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-cilium-run\") pod \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " Mar 17 19:54:52.597090 kubelet[1421]: I0317 19:54:52.597061 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-host-proc-sys-kernel\") pod \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\" (UID: \"e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584\") " Mar 17 19:54:52.597338 kubelet[1421]: I0317 19:54:52.597299 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" (UID: "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:54:52.597603 kubelet[1421]: I0317 19:54:52.597566 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" (UID: "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:54:52.598889 kubelet[1421]: I0317 19:54:52.598844 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" (UID: "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 19:54:52.606766 kubelet[1421]: I0317 19:54:52.606719 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" (UID: "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 19:54:52.607298 kubelet[1421]: I0317 19:54:52.607261 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-cni-path" (OuterVolumeSpecName: "cni-path") pod "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" (UID: "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:54:52.607870 kubelet[1421]: I0317 19:54:52.607798 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" (UID: "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:54:52.608990 kubelet[1421]: I0317 19:54:52.608946 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-kube-api-access-rflgb" (OuterVolumeSpecName: "kube-api-access-rflgb") pod "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" (UID: "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584"). InnerVolumeSpecName "kube-api-access-rflgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 19:54:52.609283 kubelet[1421]: I0317 19:54:52.609245 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" (UID: "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:54:52.609550 kubelet[1421]: I0317 19:54:52.609512 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" (UID: "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:54:52.615460 kubelet[1421]: I0317 19:54:52.615414 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" (UID: "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 19:54:52.615756 kubelet[1421]: I0317 19:54:52.615696 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" (UID: "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:54:52.616198 kubelet[1421]: I0317 19:54:52.616160 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" (UID: "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:54:52.617662 kubelet[1421]: I0317 19:54:52.617623 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" (UID: "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:54:52.617925 kubelet[1421]: I0317 19:54:52.617887 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-hostproc" (OuterVolumeSpecName: "hostproc") pod "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" (UID: "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:54:52.621713 kubelet[1421]: I0317 19:54:52.621664 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" (UID: "e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 19:54:52.671499 systemd[1]: Removed slice kubepods-burstable-pode6ad1eb3_bc22_4ce9_8f97_065d4f8eb584.slice. Mar 17 19:54:52.697629 kubelet[1421]: I0317 19:54:52.697561 1421 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-cni-path\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:52.697629 kubelet[1421]: I0317 19:54:52.697588 1421 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-lib-modules\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:52.697629 kubelet[1421]: I0317 19:54:52.697599 1421 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-clustermesh-secrets\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:52.697629 kubelet[1421]: I0317 19:54:52.697614 1421 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-host-proc-sys-net\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:52.697629 kubelet[1421]: I0317 19:54:52.697624 1421 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rflgb\" (UniqueName: \"kubernetes.io/projected/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-kube-api-access-rflgb\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:52.697629 kubelet[1421]: I0317 19:54:52.697633 1421 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-hostproc\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:52.697629 kubelet[1421]: I0317 19:54:52.697644 1421 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-hubble-tls\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:52.698204 kubelet[1421]: I0317 19:54:52.697655 1421 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-cilium-ipsec-secrets\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:52.698204 kubelet[1421]: I0317 19:54:52.697665 1421 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-bpf-maps\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:52.698204 kubelet[1421]: I0317 19:54:52.697674 1421 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-cilium-cgroup\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:52.698204 kubelet[1421]: I0317 19:54:52.697683 1421 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-cilium-config-path\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:52.698204 kubelet[1421]: I0317 19:54:52.697692 1421 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-xtables-lock\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:52.698204 kubelet[1421]: I0317 19:54:52.697701 1421 reconciler_common.go:289] "Volume detached 
for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-etc-cni-netd\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:52.698204 kubelet[1421]: I0317 19:54:52.697710 1421 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-cilium-run\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:52.698204 kubelet[1421]: I0317 19:54:52.697719 1421 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584-host-proc-sys-kernel\") on node \"172.24.4.126\" DevicePath \"\"" Mar 17 19:54:53.052104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a63f5f5da85646f3dce47b11fcf805b5fb5503900f91ff502070670de58c2685-rootfs.mount: Deactivated successfully. Mar 17 19:54:53.052322 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a63f5f5da85646f3dce47b11fcf805b5fb5503900f91ff502070670de58c2685-shm.mount: Deactivated successfully. Mar 17 19:54:53.052533 systemd[1]: var-lib-kubelet-pods-e6ad1eb3\x2dbc22\x2d4ce9\x2d8f97\x2d065d4f8eb584-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Mar 17 19:54:53.052689 systemd[1]: var-lib-kubelet-pods-e6ad1eb3\x2dbc22\x2d4ce9\x2d8f97\x2d065d4f8eb584-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drflgb.mount: Deactivated successfully. Mar 17 19:54:53.052844 systemd[1]: var-lib-kubelet-pods-e6ad1eb3\x2dbc22\x2d4ce9\x2d8f97\x2d065d4f8eb584-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 19:54:53.052987 systemd[1]: var-lib-kubelet-pods-e6ad1eb3\x2dbc22\x2d4ce9\x2d8f97\x2d065d4f8eb584-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
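The systemd mount units deactivated above use systemd's unit-name escaping: '-' inside a path component becomes \x2d, '~' becomes \x7e, and '/' becomes '-'. As an illustration only, one of the unit names from this log can be turned back into its mount-point path with systemd-escape (the .mount suffix dropped first):

  # Decode a kubelet volume mount unit name back into the path it represents.
  systemd-escape --unescape --path \
    'var-lib-kubelet-pods-e6ad1eb3\x2dbc22\x2d4ce9\x2d8f97\x2d065d4f8eb584-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets'
  # Expected output, roughly:
  #   /var/lib/kubelet/pods/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584/volumes/kubernetes.io~secret/clustermesh-secrets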
Mar 17 19:54:53.133478 kubelet[1421]: I0317 19:54:53.133425 1421 scope.go:117] "RemoveContainer" containerID="2f35869db4d2d9b5ceec05c03e3b645eedac95cf30145e7cf63f84ef6ce3b1c9" Mar 17 19:54:53.139816 env[1154]: time="2025-03-17T19:54:53.139754706Z" level=info msg="RemoveContainer for \"2f35869db4d2d9b5ceec05c03e3b645eedac95cf30145e7cf63f84ef6ce3b1c9\"" Mar 17 19:54:53.142716 kubelet[1421]: I0317 19:54:53.142628 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-n5gcw" podStartSLOduration=2.593870809 podStartE2EDuration="5.142595249s" podCreationTimestamp="2025-03-17 19:54:48 +0000 UTC" firstStartedPulling="2025-03-17 19:54:49.484796818 +0000 UTC m=+85.882459778" lastFinishedPulling="2025-03-17 19:54:52.033521208 +0000 UTC m=+88.431184218" observedRunningTime="2025-03-17 19:54:53.142304248 +0000 UTC m=+89.539967268" watchObservedRunningTime="2025-03-17 19:54:53.142595249 +0000 UTC m=+89.540258259" Mar 17 19:54:53.148140 env[1154]: time="2025-03-17T19:54:53.147995191Z" level=info msg="RemoveContainer for \"2f35869db4d2d9b5ceec05c03e3b645eedac95cf30145e7cf63f84ef6ce3b1c9\" returns successfully" Mar 17 19:54:53.185675 kubelet[1421]: I0317 19:54:53.185595 1421 topology_manager.go:215] "Topology Admit Handler" podUID="0d59e984-3d80-47a4-928a-5e528fea2d8f" podNamespace="kube-system" podName="cilium-kq4wm" Mar 17 19:54:53.185905 kubelet[1421]: E0317 19:54:53.185695 1421 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" containerName="mount-cgroup" Mar 17 19:54:53.185905 kubelet[1421]: E0317 19:54:53.185719 1421 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" containerName="mount-cgroup" Mar 17 19:54:53.185905 kubelet[1421]: I0317 19:54:53.185764 1421 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" containerName="mount-cgroup" Mar 17 19:54:53.185905 kubelet[1421]: I0317 19:54:53.185779 1421 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" containerName="mount-cgroup" Mar 17 19:54:53.198022 systemd[1]: Created slice kubepods-burstable-pod0d59e984_3d80_47a4_928a_5e528fea2d8f.slice. 
Mar 17 19:54:53.301410 kubelet[1421]: I0317 19:54:53.301325 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d59e984-3d80-47a4-928a-5e528fea2d8f-bpf-maps\") pod \"cilium-kq4wm\" (UID: \"0d59e984-3d80-47a4-928a-5e528fea2d8f\") " pod="kube-system/cilium-kq4wm" Mar 17 19:54:53.301869 kubelet[1421]: I0317 19:54:53.301836 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d59e984-3d80-47a4-928a-5e528fea2d8f-hostproc\") pod \"cilium-kq4wm\" (UID: \"0d59e984-3d80-47a4-928a-5e528fea2d8f\") " pod="kube-system/cilium-kq4wm" Mar 17 19:54:53.302264 kubelet[1421]: I0317 19:54:53.302225 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d59e984-3d80-47a4-928a-5e528fea2d8f-cni-path\") pod \"cilium-kq4wm\" (UID: \"0d59e984-3d80-47a4-928a-5e528fea2d8f\") " pod="kube-system/cilium-kq4wm" Mar 17 19:54:53.302698 kubelet[1421]: I0317 19:54:53.302621 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d59e984-3d80-47a4-928a-5e528fea2d8f-clustermesh-secrets\") pod \"cilium-kq4wm\" (UID: \"0d59e984-3d80-47a4-928a-5e528fea2d8f\") " pod="kube-system/cilium-kq4wm" Mar 17 19:54:53.303147 kubelet[1421]: I0317 19:54:53.302999 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d59e984-3d80-47a4-928a-5e528fea2d8f-cilium-config-path\") pod \"cilium-kq4wm\" (UID: \"0d59e984-3d80-47a4-928a-5e528fea2d8f\") " pod="kube-system/cilium-kq4wm" Mar 17 19:54:53.304230 kubelet[1421]: I0317 19:54:53.304133 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d59e984-3d80-47a4-928a-5e528fea2d8f-cilium-cgroup\") pod \"cilium-kq4wm\" (UID: \"0d59e984-3d80-47a4-928a-5e528fea2d8f\") " pod="kube-system/cilium-kq4wm" Mar 17 19:54:53.304683 kubelet[1421]: I0317 19:54:53.304618 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0d59e984-3d80-47a4-928a-5e528fea2d8f-cilium-ipsec-secrets\") pod \"cilium-kq4wm\" (UID: \"0d59e984-3d80-47a4-928a-5e528fea2d8f\") " pod="kube-system/cilium-kq4wm" Mar 17 19:54:53.305063 kubelet[1421]: I0317 19:54:53.304987 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d59e984-3d80-47a4-928a-5e528fea2d8f-host-proc-sys-net\") pod \"cilium-kq4wm\" (UID: \"0d59e984-3d80-47a4-928a-5e528fea2d8f\") " pod="kube-system/cilium-kq4wm" Mar 17 19:54:53.306542 kubelet[1421]: I0317 19:54:53.306461 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d59e984-3d80-47a4-928a-5e528fea2d8f-host-proc-sys-kernel\") pod \"cilium-kq4wm\" (UID: \"0d59e984-3d80-47a4-928a-5e528fea2d8f\") " pod="kube-system/cilium-kq4wm" Mar 17 19:54:53.306682 kubelet[1421]: I0317 19:54:53.306592 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/0d59e984-3d80-47a4-928a-5e528fea2d8f-hubble-tls\") pod \"cilium-kq4wm\" (UID: \"0d59e984-3d80-47a4-928a-5e528fea2d8f\") " pod="kube-system/cilium-kq4wm" Mar 17 19:54:53.306682 kubelet[1421]: I0317 19:54:53.306641 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d59e984-3d80-47a4-928a-5e528fea2d8f-cilium-run\") pod \"cilium-kq4wm\" (UID: \"0d59e984-3d80-47a4-928a-5e528fea2d8f\") " pod="kube-system/cilium-kq4wm" Mar 17 19:54:53.306839 kubelet[1421]: I0317 19:54:53.306684 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d59e984-3d80-47a4-928a-5e528fea2d8f-etc-cni-netd\") pod \"cilium-kq4wm\" (UID: \"0d59e984-3d80-47a4-928a-5e528fea2d8f\") " pod="kube-system/cilium-kq4wm" Mar 17 19:54:53.306839 kubelet[1421]: I0317 19:54:53.306726 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d59e984-3d80-47a4-928a-5e528fea2d8f-lib-modules\") pod \"cilium-kq4wm\" (UID: \"0d59e984-3d80-47a4-928a-5e528fea2d8f\") " pod="kube-system/cilium-kq4wm" Mar 17 19:54:53.306839 kubelet[1421]: I0317 19:54:53.306771 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d59e984-3d80-47a4-928a-5e528fea2d8f-xtables-lock\") pod \"cilium-kq4wm\" (UID: \"0d59e984-3d80-47a4-928a-5e528fea2d8f\") " pod="kube-system/cilium-kq4wm" Mar 17 19:54:53.306839 kubelet[1421]: I0317 19:54:53.306813 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7jv7\" (UniqueName: \"kubernetes.io/projected/0d59e984-3d80-47a4-928a-5e528fea2d8f-kube-api-access-g7jv7\") pod \"cilium-kq4wm\" (UID: \"0d59e984-3d80-47a4-928a-5e528fea2d8f\") " pod="kube-system/cilium-kq4wm" Mar 17 19:54:53.510979 env[1154]: time="2025-03-17T19:54:53.510870280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kq4wm,Uid:0d59e984-3d80-47a4-928a-5e528fea2d8f,Namespace:kube-system,Attempt:0,}" Mar 17 19:54:53.537664 env[1154]: time="2025-03-17T19:54:53.537478264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 19:54:53.538167 env[1154]: time="2025-03-17T19:54:53.538062658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 19:54:53.538477 env[1154]: time="2025-03-17T19:54:53.538416154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 19:54:53.539125 env[1154]: time="2025-03-17T19:54:53.539055832Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cddd458a2ac23bc8b460d4f4c28988914a1c1a45b03c0118b4f90886b5dfab2 pid=3179 runtime=io.containerd.runc.v2 Mar 17 19:54:53.543787 kubelet[1421]: E0317 19:54:53.543653 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:53.572019 systemd[1]: Started cri-containerd-9cddd458a2ac23bc8b460d4f4c28988914a1c1a45b03c0118b4f90886b5dfab2.scope. 
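At this point the cilium-kq4wm sandbox is up and the init containers start executing in order. Purely as a sketch, and assuming ordinary kubectl access to this cluster (the pod name and namespace are the ones recorded above), the same progression could be followed from outside the node:

  # Watch the pod work through mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state.
  kubectl -n kube-system get pod cilium-kq4wm -w
  # Print each init container's name and current state.
  kubectl -n kube-system get pod cilium-kq4wm \
    -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{"\t"}{.state}{"\n"}{end}'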
Mar 17 19:54:53.620219 env[1154]: time="2025-03-17T19:54:53.620179445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kq4wm,Uid:0d59e984-3d80-47a4-928a-5e528fea2d8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cddd458a2ac23bc8b460d4f4c28988914a1c1a45b03c0118b4f90886b5dfab2\"" Mar 17 19:54:53.623167 env[1154]: time="2025-03-17T19:54:53.623139609Z" level=info msg="CreateContainer within sandbox \"9cddd458a2ac23bc8b460d4f4c28988914a1c1a45b03c0118b4f90886b5dfab2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 19:54:53.643205 env[1154]: time="2025-03-17T19:54:53.643088875Z" level=info msg="CreateContainer within sandbox \"9cddd458a2ac23bc8b460d4f4c28988914a1c1a45b03c0118b4f90886b5dfab2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ab98b8279106b9dde820a5db0d37d54351279f554d3493fa3632af460bca3bf8\"" Mar 17 19:54:53.643950 env[1154]: time="2025-03-17T19:54:53.643924957Z" level=info msg="StartContainer for \"ab98b8279106b9dde820a5db0d37d54351279f554d3493fa3632af460bca3bf8\"" Mar 17 19:54:53.658846 systemd[1]: Started cri-containerd-ab98b8279106b9dde820a5db0d37d54351279f554d3493fa3632af460bca3bf8.scope. Mar 17 19:54:53.705471 env[1154]: time="2025-03-17T19:54:53.705432864Z" level=info msg="StartContainer for \"ab98b8279106b9dde820a5db0d37d54351279f554d3493fa3632af460bca3bf8\" returns successfully" Mar 17 19:54:53.709025 systemd[1]: cri-containerd-ab98b8279106b9dde820a5db0d37d54351279f554d3493fa3632af460bca3bf8.scope: Deactivated successfully. Mar 17 19:54:53.739558 env[1154]: time="2025-03-17T19:54:53.739516472Z" level=info msg="shim disconnected" id=ab98b8279106b9dde820a5db0d37d54351279f554d3493fa3632af460bca3bf8 Mar 17 19:54:53.739909 env[1154]: time="2025-03-17T19:54:53.739889565Z" level=warning msg="cleaning up after shim disconnected" id=ab98b8279106b9dde820a5db0d37d54351279f554d3493fa3632af460bca3bf8 namespace=k8s.io Mar 17 19:54:53.739989 env[1154]: time="2025-03-17T19:54:53.739973451Z" level=info msg="cleaning up dead shim" Mar 17 19:54:53.750039 env[1154]: time="2025-03-17T19:54:53.749973872Z" level=warning msg="cleanup warnings time=\"2025-03-17T19:54:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3263 runtime=io.containerd.runc.v2\n" Mar 17 19:54:54.145642 env[1154]: time="2025-03-17T19:54:54.145551599Z" level=info msg="CreateContainer within sandbox \"9cddd458a2ac23bc8b460d4f4c28988914a1c1a45b03c0118b4f90886b5dfab2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 19:54:54.187605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2335369488.mount: Deactivated successfully. Mar 17 19:54:54.195310 env[1154]: time="2025-03-17T19:54:54.195209887Z" level=info msg="CreateContainer within sandbox \"9cddd458a2ac23bc8b460d4f4c28988914a1c1a45b03c0118b4f90886b5dfab2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f1cc7ad0fb8b67d09046fead00fda1dab2b0b507be5c5b2d23c5ed1c3e135394\"" Mar 17 19:54:54.197048 env[1154]: time="2025-03-17T19:54:54.196979582Z" level=info msg="StartContainer for \"f1cc7ad0fb8b67d09046fead00fda1dab2b0b507be5c5b2d23c5ed1c3e135394\"" Mar 17 19:54:54.244790 systemd[1]: Started cri-containerd-f1cc7ad0fb8b67d09046fead00fda1dab2b0b507be5c5b2d23c5ed1c3e135394.scope. 
Mar 17 19:54:54.286791 env[1154]: time="2025-03-17T19:54:54.286743734Z" level=info msg="StartContainer for \"f1cc7ad0fb8b67d09046fead00fda1dab2b0b507be5c5b2d23c5ed1c3e135394\" returns successfully" Mar 17 19:54:54.291451 systemd[1]: cri-containerd-f1cc7ad0fb8b67d09046fead00fda1dab2b0b507be5c5b2d23c5ed1c3e135394.scope: Deactivated successfully. Mar 17 19:54:54.316673 env[1154]: time="2025-03-17T19:54:54.316618766Z" level=info msg="shim disconnected" id=f1cc7ad0fb8b67d09046fead00fda1dab2b0b507be5c5b2d23c5ed1c3e135394 Mar 17 19:54:54.316673 env[1154]: time="2025-03-17T19:54:54.316666083Z" level=warning msg="cleaning up after shim disconnected" id=f1cc7ad0fb8b67d09046fead00fda1dab2b0b507be5c5b2d23c5ed1c3e135394 namespace=k8s.io Mar 17 19:54:54.316673 env[1154]: time="2025-03-17T19:54:54.316677936Z" level=info msg="cleaning up dead shim" Mar 17 19:54:54.324517 env[1154]: time="2025-03-17T19:54:54.324464801Z" level=warning msg="cleanup warnings time=\"2025-03-17T19:54:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3328 runtime=io.containerd.runc.v2\n" Mar 17 19:54:54.543929 kubelet[1421]: E0317 19:54:54.543872 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:54.606131 kubelet[1421]: E0317 19:54:54.606019 1421 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 19:54:54.669454 kubelet[1421]: I0317 19:54:54.669343 1421 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584" path="/var/lib/kubelet/pods/e6ad1eb3-bc22-4ce9-8f97-065d4f8eb584/volumes" Mar 17 19:54:55.053021 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1cc7ad0fb8b67d09046fead00fda1dab2b0b507be5c5b2d23c5ed1c3e135394-rootfs.mount: Deactivated successfully. Mar 17 19:54:55.152205 env[1154]: time="2025-03-17T19:54:55.152137007Z" level=info msg="CreateContainer within sandbox \"9cddd458a2ac23bc8b460d4f4c28988914a1c1a45b03c0118b4f90886b5dfab2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 19:54:55.190296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3537180742.mount: Deactivated successfully. Mar 17 19:54:55.207782 env[1154]: time="2025-03-17T19:54:55.207689172Z" level=info msg="CreateContainer within sandbox \"9cddd458a2ac23bc8b460d4f4c28988914a1c1a45b03c0118b4f90886b5dfab2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"24e4fd461ac15c5f95c8c0cb7fb809858d523458b3309ea7c028e40729ddb522\"" Mar 17 19:54:55.209182 env[1154]: time="2025-03-17T19:54:55.209132151Z" level=info msg="StartContainer for \"24e4fd461ac15c5f95c8c0cb7fb809858d523458b3309ea7c028e40729ddb522\"" Mar 17 19:54:55.257315 systemd[1]: Started cri-containerd-24e4fd461ac15c5f95c8c0cb7fb809858d523458b3309ea7c028e40729ddb522.scope. Mar 17 19:54:55.305339 systemd[1]: cri-containerd-24e4fd461ac15c5f95c8c0cb7fb809858d523458b3309ea7c028e40729ddb522.scope: Deactivated successfully. 
Mar 17 19:54:55.307447 env[1154]: time="2025-03-17T19:54:55.307088855Z" level=info msg="StartContainer for \"24e4fd461ac15c5f95c8c0cb7fb809858d523458b3309ea7c028e40729ddb522\" returns successfully" Mar 17 19:54:55.338691 env[1154]: time="2025-03-17T19:54:55.338630184Z" level=info msg="shim disconnected" id=24e4fd461ac15c5f95c8c0cb7fb809858d523458b3309ea7c028e40729ddb522 Mar 17 19:54:55.338691 env[1154]: time="2025-03-17T19:54:55.338678034Z" level=warning msg="cleaning up after shim disconnected" id=24e4fd461ac15c5f95c8c0cb7fb809858d523458b3309ea7c028e40729ddb522 namespace=k8s.io Mar 17 19:54:55.338691 env[1154]: time="2025-03-17T19:54:55.338689364Z" level=info msg="cleaning up dead shim" Mar 17 19:54:55.346841 env[1154]: time="2025-03-17T19:54:55.346796659Z" level=warning msg="cleanup warnings time=\"2025-03-17T19:54:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3386 runtime=io.containerd.runc.v2\n" Mar 17 19:54:55.545410 kubelet[1421]: E0317 19:54:55.545245 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:55.698678 kubelet[1421]: W0317 19:54:55.697606 1421 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6ad1eb3_bc22_4ce9_8f97_065d4f8eb584.slice/cri-containerd-2f35869db4d2d9b5ceec05c03e3b645eedac95cf30145e7cf63f84ef6ce3b1c9.scope WatchSource:0}: container "2f35869db4d2d9b5ceec05c03e3b645eedac95cf30145e7cf63f84ef6ce3b1c9" in namespace "k8s.io": not found Mar 17 19:54:56.053216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24e4fd461ac15c5f95c8c0cb7fb809858d523458b3309ea7c028e40729ddb522-rootfs.mount: Deactivated successfully. Mar 17 19:54:56.164961 env[1154]: time="2025-03-17T19:54:56.164894716Z" level=info msg="CreateContainer within sandbox \"9cddd458a2ac23bc8b460d4f4c28988914a1c1a45b03c0118b4f90886b5dfab2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 19:54:56.200880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1353012013.mount: Deactivated successfully. Mar 17 19:54:56.212478 env[1154]: time="2025-03-17T19:54:56.212328726Z" level=info msg="CreateContainer within sandbox \"9cddd458a2ac23bc8b460d4f4c28988914a1c1a45b03c0118b4f90886b5dfab2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2db32e5e5b920d34e0a934e7fb8f864f0d0b7c806c095277fce7acdc8ca6b2a3\"" Mar 17 19:54:56.213280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount40651149.mount: Deactivated successfully. Mar 17 19:54:56.214099 env[1154]: time="2025-03-17T19:54:56.214016882Z" level=info msg="StartContainer for \"2db32e5e5b920d34e0a934e7fb8f864f0d0b7c806c095277fce7acdc8ca6b2a3\"" Mar 17 19:54:56.247863 systemd[1]: Started cri-containerd-2db32e5e5b920d34e0a934e7fb8f864f0d0b7c806c095277fce7acdc8ca6b2a3.scope. Mar 17 19:54:56.280148 systemd[1]: cri-containerd-2db32e5e5b920d34e0a934e7fb8f864f0d0b7c806c095277fce7acdc8ca6b2a3.scope: Deactivated successfully. 
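Judging by their names, mount-bpf-fs and clean-cilium-state are the init steps that leave the BPF filesystem (typically /sys/fs/bpf) mounted and stale agent state cleared before cilium-agent starts. A hedged on-node sanity check, shown only as an illustration:

  # List mounts of type bpf; the BPF filesystem should appear once mount-bpf-fs has run.
  mount -t bpf
  # Equivalent check of the usual mount point:
  mountpoint -q /sys/fs/bpf && echo "/sys/fs/bpf is mounted"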
Mar 17 19:54:56.283079 env[1154]: time="2025-03-17T19:54:56.283049328Z" level=info msg="StartContainer for \"2db32e5e5b920d34e0a934e7fb8f864f0d0b7c806c095277fce7acdc8ca6b2a3\" returns successfully" Mar 17 19:54:56.305807 env[1154]: time="2025-03-17T19:54:56.305713833Z" level=info msg="shim disconnected" id=2db32e5e5b920d34e0a934e7fb8f864f0d0b7c806c095277fce7acdc8ca6b2a3 Mar 17 19:54:56.306025 env[1154]: time="2025-03-17T19:54:56.306004403Z" level=warning msg="cleaning up after shim disconnected" id=2db32e5e5b920d34e0a934e7fb8f864f0d0b7c806c095277fce7acdc8ca6b2a3 namespace=k8s.io Mar 17 19:54:56.306125 env[1154]: time="2025-03-17T19:54:56.306109508Z" level=info msg="cleaning up dead shim" Mar 17 19:54:56.314010 env[1154]: time="2025-03-17T19:54:56.313973193Z" level=warning msg="cleanup warnings time=\"2025-03-17T19:54:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3442 runtime=io.containerd.runc.v2\n" Mar 17 19:54:56.545528 kubelet[1421]: E0317 19:54:56.545446 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:57.053477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2db32e5e5b920d34e0a934e7fb8f864f0d0b7c806c095277fce7acdc8ca6b2a3-rootfs.mount: Deactivated successfully. Mar 17 19:54:57.170793 env[1154]: time="2025-03-17T19:54:57.170704521Z" level=info msg="CreateContainer within sandbox \"9cddd458a2ac23bc8b460d4f4c28988914a1c1a45b03c0118b4f90886b5dfab2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 19:54:57.208669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1339632439.mount: Deactivated successfully. Mar 17 19:54:57.227625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2705295503.mount: Deactivated successfully. Mar 17 19:54:57.230553 env[1154]: time="2025-03-17T19:54:57.230484597Z" level=info msg="CreateContainer within sandbox \"9cddd458a2ac23bc8b460d4f4c28988914a1c1a45b03c0118b4f90886b5dfab2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d0c5a9b2ce5bafd9fc3ab8cbd3687f026b690b577b719acd192caabe9c194053\"" Mar 17 19:54:57.233100 env[1154]: time="2025-03-17T19:54:57.233050684Z" level=info msg="StartContainer for \"d0c5a9b2ce5bafd9fc3ab8cbd3687f026b690b577b719acd192caabe9c194053\"" Mar 17 19:54:57.270094 systemd[1]: Started cri-containerd-d0c5a9b2ce5bafd9fc3ab8cbd3687f026b690b577b719acd192caabe9c194053.scope. 
Mar 17 19:54:57.308452 env[1154]: time="2025-03-17T19:54:57.308330636Z" level=info msg="StartContainer for \"d0c5a9b2ce5bafd9fc3ab8cbd3687f026b690b577b719acd192caabe9c194053\" returns successfully" Mar 17 19:54:57.546281 kubelet[1421]: E0317 19:54:57.546220 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:57.650482 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 19:54:57.700471 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Mar 17 19:54:58.198287 kubelet[1421]: I0317 19:54:58.198076 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kq4wm" podStartSLOduration=5.198010339 podStartE2EDuration="5.198010339s" podCreationTimestamp="2025-03-17 19:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 19:54:58.196785553 +0000 UTC m=+94.594448613" watchObservedRunningTime="2025-03-17 19:54:58.198010339 +0000 UTC m=+94.595673379" Mar 17 19:54:58.546962 kubelet[1421]: E0317 19:54:58.546902 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:58.826752 kubelet[1421]: W0317 19:54:58.826127 1421 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d59e984_3d80_47a4_928a_5e528fea2d8f.slice/cri-containerd-ab98b8279106b9dde820a5db0d37d54351279f554d3493fa3632af460bca3bf8.scope WatchSource:0}: task ab98b8279106b9dde820a5db0d37d54351279f554d3493fa3632af460bca3bf8 not found: not found Mar 17 19:54:59.547562 kubelet[1421]: E0317 19:54:59.547486 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:54:59.593025 systemd[1]: run-containerd-runc-k8s.io-d0c5a9b2ce5bafd9fc3ab8cbd3687f026b690b577b719acd192caabe9c194053-runc.0bLdcu.mount: Deactivated successfully. Mar 17 19:55:00.547875 kubelet[1421]: E0317 19:55:00.547816 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:55:00.807511 systemd-networkd[989]: lxc_health: Link UP Mar 17 19:55:00.817853 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 19:55:00.817433 systemd-networkd[989]: lxc_health: Gained carrier Mar 17 19:55:01.548766 kubelet[1421]: E0317 19:55:01.548730 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:55:01.737040 systemd[1]: run-containerd-runc-k8s.io-d0c5a9b2ce5bafd9fc3ab8cbd3687f026b690b577b719acd192caabe9c194053-runc.ehIpJ9.mount: Deactivated successfully. 
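With the cilium-agent container started and the lxc_health link up (the interface Cilium uses for its endpoint health probes), the agent's own view of its health can be queried. This is a sketch only; it assumes kubectl access and uses the container name cilium-agent recorded in the ContainerMetadata above:

  # Summary of the agent's status from inside the running container.
  kubectl -n kube-system exec cilium-kq4wm -c cilium-agent -- cilium status --brief
  # Connectivity/health probe results over the lxc_health path.
  kubectl -n kube-system exec cilium-kq4wm -c cilium-agent -- cilium-health status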
Mar 17 19:55:01.940093 kubelet[1421]: W0317 19:55:01.939979 1421 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d59e984_3d80_47a4_928a_5e528fea2d8f.slice/cri-containerd-f1cc7ad0fb8b67d09046fead00fda1dab2b0b507be5c5b2d23c5ed1c3e135394.scope WatchSource:0}: task f1cc7ad0fb8b67d09046fead00fda1dab2b0b507be5c5b2d23c5ed1c3e135394 not found: not found Mar 17 19:55:02.549110 kubelet[1421]: E0317 19:55:02.549049 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:55:02.658690 systemd-networkd[989]: lxc_health: Gained IPv6LL Mar 17 19:55:03.551049 kubelet[1421]: E0317 19:55:03.551016 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:55:04.003730 systemd[1]: run-containerd-runc-k8s.io-d0c5a9b2ce5bafd9fc3ab8cbd3687f026b690b577b719acd192caabe9c194053-runc.Pm43y2.mount: Deactivated successfully. Mar 17 19:55:04.467302 kubelet[1421]: E0317 19:55:04.467184 1421 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:55:04.552447 kubelet[1421]: E0317 19:55:04.552389 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:55:05.049639 kubelet[1421]: W0317 19:55:05.049581 1421 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d59e984_3d80_47a4_928a_5e528fea2d8f.slice/cri-containerd-24e4fd461ac15c5f95c8c0cb7fb809858d523458b3309ea7c028e40729ddb522.scope WatchSource:0}: task 24e4fd461ac15c5f95c8c0cb7fb809858d523458b3309ea7c028e40729ddb522 not found: not found Mar 17 19:55:05.553441 kubelet[1421]: E0317 19:55:05.553343 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:55:06.230474 systemd[1]: run-containerd-runc-k8s.io-d0c5a9b2ce5bafd9fc3ab8cbd3687f026b690b577b719acd192caabe9c194053-runc.juyh1U.mount: Deactivated successfully. 
Mar 17 19:55:06.554671 kubelet[1421]: E0317 19:55:06.554591 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:55:07.555844 kubelet[1421]: E0317 19:55:07.555722 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:55:08.161503 kubelet[1421]: W0317 19:55:08.161444 1421 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d59e984_3d80_47a4_928a_5e528fea2d8f.slice/cri-containerd-2db32e5e5b920d34e0a934e7fb8f864f0d0b7c806c095277fce7acdc8ca6b2a3.scope WatchSource:0}: task 2db32e5e5b920d34e0a934e7fb8f864f0d0b7c806c095277fce7acdc8ca6b2a3 not found: not found Mar 17 19:55:08.557722 kubelet[1421]: E0317 19:55:08.557671 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:55:09.559177 kubelet[1421]: E0317 19:55:09.559105 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:55:10.560029 kubelet[1421]: E0317 19:55:10.559957 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:55:11.560417 kubelet[1421]: E0317 19:55:11.560319 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:55:12.562270 kubelet[1421]: E0317 19:55:12.562195 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:55:13.563150 kubelet[1421]: E0317 19:55:13.562985 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 19:55:14.564632 kubelet[1421]: E0317 19:55:14.564541 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
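The kubelet message repeated every second from here on, "Unable to read config path" with path="/etc/kubernetes/manifests", is the static-pod file source noticing that its configured manifest directory does not exist; the rest of this log shows no static pods in use. If the noise is unwanted, one common remedy (assuming the default staticPodPath shown in the message) is simply to create the directory on the node:

  # Give the kubelet's static-pod watcher an (empty) directory to watch.
  sudo mkdir -p /etc/kubernetes/manifests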