May 9 00:39:52.888556 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu May 8 22:52:37 -00 2025 May 9 00:39:52.888578 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b May 9 00:39:52.888589 kernel: BIOS-provided physical RAM map: May 9 00:39:52.888596 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 9 00:39:52.888604 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 9 00:39:52.888612 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 9 00:39:52.888622 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable May 9 00:39:52.888628 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved May 9 00:39:52.888634 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 9 00:39:52.888643 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 9 00:39:52.888649 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 9 00:39:52.888656 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 9 00:39:52.888662 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 9 00:39:52.888668 kernel: NX (Execute Disable) protection: active May 9 00:39:52.888676 kernel: APIC: Static calls initialized May 9 00:39:52.888685 kernel: SMBIOS 2.8 present. 
May 9 00:39:52.888692 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 May 9 00:39:52.888699 kernel: Hypervisor detected: KVM May 9 00:39:52.888705 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 9 00:39:52.888712 kernel: kvm-clock: using sched offset of 2207122270 cycles May 9 00:39:52.888719 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 9 00:39:52.888726 kernel: tsc: Detected 2794.748 MHz processor May 9 00:39:52.888733 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 9 00:39:52.888740 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 9 00:39:52.888747 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 May 9 00:39:52.888757 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 9 00:39:52.888764 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 9 00:39:52.888771 kernel: Using GB pages for direct mapping May 9 00:39:52.888777 kernel: ACPI: Early table checksum verification disabled May 9 00:39:52.888785 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) May 9 00:39:52.888792 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:39:52.888799 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:39:52.888805 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:39:52.888814 kernel: ACPI: FACS 0x000000009CFE0000 000040 May 9 00:39:52.888821 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:39:52.888828 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:39:52.888835 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:39:52.888842 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:39:52.888849 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] May 9 00:39:52.888856 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] May 9 00:39:52.888866 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] May 9 00:39:52.888876 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] May 9 00:39:52.888883 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] May 9 00:39:52.888890 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] May 9 00:39:52.888897 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] May 9 00:39:52.888904 kernel: No NUMA configuration found May 9 00:39:52.888918 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] May 9 00:39:52.888926 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] May 9 00:39:52.888936 kernel: Zone ranges: May 9 00:39:52.888943 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 9 00:39:52.888950 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] May 9 00:39:52.888957 kernel: Normal empty May 9 00:39:52.888964 kernel: Movable zone start for each node May 9 00:39:52.888971 kernel: Early memory node ranges May 9 00:39:52.888978 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 9 00:39:52.888985 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] May 9 00:39:52.888993 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] May 9 00:39:52.889002 kernel: On 
node 0, zone DMA: 1 pages in unavailable ranges May 9 00:39:52.889009 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 9 00:39:52.889016 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges May 9 00:39:52.889023 kernel: ACPI: PM-Timer IO Port: 0x608 May 9 00:39:52.889031 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 9 00:39:52.889038 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 9 00:39:52.889045 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 9 00:39:52.889052 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 9 00:39:52.889059 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 9 00:39:52.889068 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 9 00:39:52.889075 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 9 00:39:52.889083 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 9 00:39:52.889090 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 9 00:39:52.889097 kernel: TSC deadline timer available May 9 00:39:52.889104 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 9 00:39:52.889111 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 9 00:39:52.889118 kernel: kvm-guest: KVM setup pv remote TLB flush May 9 00:39:52.889125 kernel: kvm-guest: setup PV sched yield May 9 00:39:52.889132 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 9 00:39:52.889142 kernel: Booting paravirtualized kernel on KVM May 9 00:39:52.889149 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 9 00:39:52.889156 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 9 00:39:52.889163 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 May 9 00:39:52.889171 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 May 9 00:39:52.889177 kernel: pcpu-alloc: [0] 0 1 2 3 May 9 00:39:52.889184 kernel: kvm-guest: PV spinlocks enabled May 9 00:39:52.889192 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 9 00:39:52.889200 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b May 9 00:39:52.889210 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 9 00:39:52.889217 kernel: random: crng init done May 9 00:39:52.889225 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 9 00:39:52.889232 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 9 00:39:52.889239 kernel: Fallback order for Node 0: 0 May 9 00:39:52.889246 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 May 9 00:39:52.889253 kernel: Policy zone: DMA32 May 9 00:39:52.889260 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 9 00:39:52.889270 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 136900K reserved, 0K cma-reserved) May 9 00:39:52.889278 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 9 00:39:52.889285 kernel: ftrace: allocating 37944 entries in 149 pages May 9 00:39:52.889292 kernel: ftrace: allocated 149 pages with 4 groups May 9 00:39:52.889299 kernel: Dynamic Preempt: voluntary May 9 00:39:52.889306 kernel: rcu: Preemptible hierarchical RCU implementation. May 9 00:39:52.889314 kernel: rcu: RCU event tracing is enabled. May 9 00:39:52.889321 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 9 00:39:52.889329 kernel: Trampoline variant of Tasks RCU enabled. May 9 00:39:52.889338 kernel: Rude variant of Tasks RCU enabled. May 9 00:39:52.889346 kernel: Tracing variant of Tasks RCU enabled. May 9 00:39:52.889353 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 9 00:39:52.889360 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 9 00:39:52.889367 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 9 00:39:52.889374 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 9 00:39:52.889393 kernel: Console: colour VGA+ 80x25 May 9 00:39:52.889400 kernel: printk: console [ttyS0] enabled May 9 00:39:52.889407 kernel: ACPI: Core revision 20230628 May 9 00:39:52.889418 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 9 00:39:52.889425 kernel: APIC: Switch to symmetric I/O mode setup May 9 00:39:52.889432 kernel: x2apic enabled May 9 00:39:52.889439 kernel: APIC: Switched APIC routing to: physical x2apic May 9 00:39:52.889446 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 9 00:39:52.889454 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 9 00:39:52.889461 kernel: kvm-guest: setup PV IPIs May 9 00:39:52.889478 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 9 00:39:52.889485 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 9 00:39:52.889493 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) May 9 00:39:52.889500 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 9 00:39:52.889507 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 9 00:39:52.889517 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 9 00:39:52.889525 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 9 00:39:52.889532 kernel: Spectre V2 : Mitigation: Retpolines May 9 00:39:52.889540 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 9 00:39:52.889547 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 9 00:39:52.889557 kernel: RETBleed: Mitigation: untrained return thunk May 9 00:39:52.889565 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 9 00:39:52.889572 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 9 00:39:52.889580 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! 
May 9 00:39:52.889588 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 9 00:39:52.889597 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 9 00:39:52.889607 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 9 00:39:52.889617 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 9 00:39:52.889630 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 9 00:39:52.889637 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 9 00:39:52.889645 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 9 00:39:52.889652 kernel: Freeing SMP alternatives memory: 32K May 9 00:39:52.889660 kernel: pid_max: default: 32768 minimum: 301 May 9 00:39:52.889667 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 9 00:39:52.889675 kernel: landlock: Up and running. May 9 00:39:52.889682 kernel: SELinux: Initializing. May 9 00:39:52.889690 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 00:39:52.889700 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 00:39:52.889707 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 9 00:39:52.889715 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 00:39:52.889722 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 00:39:52.889730 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 00:39:52.889738 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 9 00:39:52.889745 kernel: ... version: 0 May 9 00:39:52.889753 kernel: ... bit width: 48 May 9 00:39:52.889760 kernel: ... generic registers: 6 May 9 00:39:52.889770 kernel: ... value mask: 0000ffffffffffff May 9 00:39:52.889777 kernel: ... max period: 00007fffffffffff May 9 00:39:52.889785 kernel: ... fixed-purpose events: 0 May 9 00:39:52.889792 kernel: ... event mask: 000000000000003f May 9 00:39:52.889799 kernel: signal: max sigframe size: 1776 May 9 00:39:52.889807 kernel: rcu: Hierarchical SRCU implementation. May 9 00:39:52.889814 kernel: rcu: Max phase no-delay instances is 400. May 9 00:39:52.889822 kernel: smp: Bringing up secondary CPUs ... May 9 00:39:52.889829 kernel: smpboot: x86: Booting SMP configuration: May 9 00:39:52.889839 kernel: .... 
node #0, CPUs: #1 #2 #3 May 9 00:39:52.889847 kernel: smp: Brought up 1 node, 4 CPUs May 9 00:39:52.889854 kernel: smpboot: Max logical packages: 1 May 9 00:39:52.889862 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 9 00:39:52.889869 kernel: devtmpfs: initialized May 9 00:39:52.889877 kernel: x86/mm: Memory block size: 128MB May 9 00:39:52.889884 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 9 00:39:52.889892 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 9 00:39:52.889899 kernel: pinctrl core: initialized pinctrl subsystem May 9 00:39:52.889917 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 9 00:39:52.889924 kernel: audit: initializing netlink subsys (disabled) May 9 00:39:52.889932 kernel: audit: type=2000 audit(1746751192.547:1): state=initialized audit_enabled=0 res=1 May 9 00:39:52.889939 kernel: thermal_sys: Registered thermal governor 'step_wise' May 9 00:39:52.889947 kernel: thermal_sys: Registered thermal governor 'user_space' May 9 00:39:52.889954 kernel: cpuidle: using governor menu May 9 00:39:52.889962 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 9 00:39:52.889969 kernel: dca service started, version 1.12.1 May 9 00:39:52.889977 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 9 00:39:52.889987 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 9 00:39:52.889994 kernel: PCI: Using configuration type 1 for base access May 9 00:39:52.890002 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 9 00:39:52.890009 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 9 00:39:52.890017 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 9 00:39:52.890025 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 9 00:39:52.890032 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 9 00:39:52.890040 kernel: ACPI: Added _OSI(Module Device) May 9 00:39:52.890047 kernel: ACPI: Added _OSI(Processor Device) May 9 00:39:52.890057 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 9 00:39:52.890065 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 9 00:39:52.890072 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 9 00:39:52.890080 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 9 00:39:52.890087 kernel: ACPI: Interpreter enabled May 9 00:39:52.890094 kernel: ACPI: PM: (supports S0 S3 S5) May 9 00:39:52.890102 kernel: ACPI: Using IOAPIC for interrupt routing May 9 00:39:52.890109 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 9 00:39:52.890117 kernel: PCI: Using E820 reservations for host bridge windows May 9 00:39:52.890127 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 9 00:39:52.890134 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 9 00:39:52.890320 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 9 00:39:52.890480 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 9 00:39:52.890607 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 9 00:39:52.890621 kernel: PCI host bridge to bus 0000:00 May 9 00:39:52.890748 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 9 00:39:52.890919 kernel: pci_bus 
0000:00: root bus resource [io 0x0d00-0xffff window] May 9 00:39:52.891038 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 9 00:39:52.891148 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 9 00:39:52.891256 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 9 00:39:52.891364 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] May 9 00:39:52.891497 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 9 00:39:52.891642 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 9 00:39:52.891779 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 9 00:39:52.891899 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] May 9 00:39:52.892031 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] May 9 00:39:52.892150 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] May 9 00:39:52.892269 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 9 00:39:52.892414 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 9 00:39:52.892543 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] May 9 00:39:52.892671 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] May 9 00:39:52.892792 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] May 9 00:39:52.892928 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 9 00:39:52.893050 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] May 9 00:39:52.893169 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] May 9 00:39:52.893287 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] May 9 00:39:52.893444 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 9 00:39:52.893567 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] May 9 00:39:52.893699 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] May 9 00:39:52.893819 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] May 9 00:39:52.893946 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] May 9 00:39:52.894075 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 9 00:39:52.894195 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 9 00:39:52.894329 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 9 00:39:52.894463 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] May 9 00:39:52.894584 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] May 9 00:39:52.894723 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 9 00:39:52.894844 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 9 00:39:52.894854 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 9 00:39:52.894862 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 9 00:39:52.894873 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 9 00:39:52.894881 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 9 00:39:52.894888 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 9 00:39:52.894896 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 9 00:39:52.894904 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 9 00:39:52.894920 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 9 00:39:52.894927 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 9 
00:39:52.894935 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 9 00:39:52.894942 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 9 00:39:52.894952 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 9 00:39:52.894960 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 9 00:39:52.894967 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 9 00:39:52.894975 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 9 00:39:52.894983 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 9 00:39:52.894990 kernel: iommu: Default domain type: Translated May 9 00:39:52.894998 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 9 00:39:52.895005 kernel: PCI: Using ACPI for IRQ routing May 9 00:39:52.895012 kernel: PCI: pci_cache_line_size set to 64 bytes May 9 00:39:52.895022 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 9 00:39:52.895030 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] May 9 00:39:52.895151 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 9 00:39:52.895270 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 9 00:39:52.895414 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 9 00:39:52.895425 kernel: vgaarb: loaded May 9 00:39:52.895432 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 9 00:39:52.895440 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 9 00:39:52.895451 kernel: clocksource: Switched to clocksource kvm-clock May 9 00:39:52.895459 kernel: VFS: Disk quotas dquot_6.6.0 May 9 00:39:52.895466 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 9 00:39:52.895474 kernel: pnp: PnP ACPI init May 9 00:39:52.895611 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 9 00:39:52.895626 kernel: pnp: PnP ACPI: found 6 devices May 9 00:39:52.895634 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 9 00:39:52.895642 kernel: NET: Registered PF_INET protocol family May 9 00:39:52.895653 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 9 00:39:52.895660 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 9 00:39:52.895668 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 9 00:39:52.895676 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 9 00:39:52.895683 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 9 00:39:52.895691 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 9 00:39:52.895698 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 9 00:39:52.895706 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 9 00:39:52.895714 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 9 00:39:52.895723 kernel: NET: Registered PF_XDP protocol family May 9 00:39:52.895837 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 9 00:39:52.895955 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 9 00:39:52.896065 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 9 00:39:52.896175 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 9 00:39:52.896283 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 9 00:39:52.896410 kernel: pci_bus 
0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] May 9 00:39:52.896421 kernel: PCI: CLS 0 bytes, default 64 May 9 00:39:52.896440 kernel: Initialise system trusted keyrings May 9 00:39:52.896456 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 9 00:39:52.896471 kernel: Key type asymmetric registered May 9 00:39:52.896479 kernel: Asymmetric key parser 'x509' registered May 9 00:39:52.896487 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 9 00:39:52.896494 kernel: io scheduler mq-deadline registered May 9 00:39:52.896516 kernel: io scheduler kyber registered May 9 00:39:52.896524 kernel: io scheduler bfq registered May 9 00:39:52.896531 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 9 00:39:52.896543 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 9 00:39:52.896550 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 9 00:39:52.896558 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 9 00:39:52.896565 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 9 00:39:52.896573 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 9 00:39:52.896581 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 9 00:39:52.896588 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 9 00:39:52.896598 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 9 00:39:52.896733 kernel: rtc_cmos 00:04: RTC can wake from S4 May 9 00:39:52.896748 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 9 00:39:52.896861 kernel: rtc_cmos 00:04: registered as rtc0 May 9 00:39:52.896986 kernel: rtc_cmos 00:04: setting system clock to 2025-05-09T00:39:52 UTC (1746751192) May 9 00:39:52.897100 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 9 00:39:52.897110 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 9 00:39:52.897117 kernel: NET: Registered PF_INET6 protocol family May 9 00:39:52.897125 kernel: Segment Routing with IPv6 May 9 00:39:52.897132 kernel: In-situ OAM (IOAM) with IPv6 May 9 00:39:52.897144 kernel: NET: Registered PF_PACKET protocol family May 9 00:39:52.897151 kernel: Key type dns_resolver registered May 9 00:39:52.897158 kernel: IPI shorthand broadcast: enabled May 9 00:39:52.897166 kernel: sched_clock: Marking stable (596002221, 105940539)->(719352069, -17409309) May 9 00:39:52.897174 kernel: registered taskstats version 1 May 9 00:39:52.897181 kernel: Loading compiled-in X.509 certificates May 9 00:39:52.897189 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: fe5c896a3ca06bb89ebdfb7ed85f611806e4c1cc' May 9 00:39:52.897196 kernel: Key type .fscrypt registered May 9 00:39:52.897203 kernel: Key type fscrypt-provisioning registered May 9 00:39:52.897213 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 9 00:39:52.897221 kernel: ima: Allocated hash algorithm: sha1 May 9 00:39:52.897228 kernel: ima: No architecture policies found May 9 00:39:52.897236 kernel: clk: Disabling unused clocks May 9 00:39:52.897243 kernel: Freeing unused kernel image (initmem) memory: 42864K May 9 00:39:52.897251 kernel: Write protecting the kernel read-only data: 36864k May 9 00:39:52.897258 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 9 00:39:52.897266 kernel: Run /init as init process May 9 00:39:52.897273 kernel: with arguments: May 9 00:39:52.897283 kernel: /init May 9 00:39:52.897290 kernel: with environment: May 9 00:39:52.897297 kernel: HOME=/ May 9 00:39:52.897305 kernel: TERM=linux May 9 00:39:52.897312 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 9 00:39:52.897321 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 00:39:52.897331 systemd[1]: Detected virtualization kvm. May 9 00:39:52.897339 systemd[1]: Detected architecture x86-64. May 9 00:39:52.897349 systemd[1]: Running in initrd. May 9 00:39:52.897357 systemd[1]: No hostname configured, using default hostname. May 9 00:39:52.897365 systemd[1]: Hostname set to . May 9 00:39:52.897373 systemd[1]: Initializing machine ID from VM UUID. May 9 00:39:52.897392 systemd[1]: Queued start job for default target initrd.target. May 9 00:39:52.897401 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:39:52.897409 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:39:52.897417 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 9 00:39:52.897429 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 9 00:39:52.897449 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 9 00:39:52.897460 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 9 00:39:52.897470 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 9 00:39:52.897481 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 9 00:39:52.897489 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:39:52.897497 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 00:39:52.897505 systemd[1]: Reached target paths.target - Path Units. May 9 00:39:52.897514 systemd[1]: Reached target slices.target - Slice Units. May 9 00:39:52.897522 systemd[1]: Reached target swap.target - Swaps. May 9 00:39:52.897530 systemd[1]: Reached target timers.target - Timer Units. May 9 00:39:52.897538 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 9 00:39:52.897546 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 00:39:52.897557 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 9 00:39:52.897565 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
May 9 00:39:52.897574 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 00:39:52.897582 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 00:39:52.897593 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:39:52.897604 systemd[1]: Reached target sockets.target - Socket Units. May 9 00:39:52.897616 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 9 00:39:52.897627 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 00:39:52.897641 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 9 00:39:52.897652 systemd[1]: Starting systemd-fsck-usr.service... May 9 00:39:52.897664 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 00:39:52.897672 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 00:39:52.897680 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:39:52.897689 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 9 00:39:52.897697 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 9 00:39:52.897705 systemd[1]: Finished systemd-fsck-usr.service. May 9 00:39:52.897735 systemd-journald[193]: Collecting audit messages is disabled. May 9 00:39:52.897755 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 9 00:39:52.897765 systemd-journald[193]: Journal started May 9 00:39:52.897785 systemd-journald[193]: Runtime Journal (/run/log/journal/1a8be440eb5f418da7f6c5dc2caef3a5) is 6.0M, max 48.4M, 42.3M free. May 9 00:39:52.877536 systemd-modules-load[194]: Inserted module 'overlay' May 9 00:39:52.915591 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 9 00:39:52.915610 kernel: Bridge firewalling registered May 9 00:39:52.904213 systemd-modules-load[194]: Inserted module 'br_netfilter' May 9 00:39:52.918289 systemd[1]: Started systemd-journald.service - Journal Service. May 9 00:39:52.918768 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 00:39:52.921128 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:39:52.923524 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 00:39:52.948499 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:39:52.951622 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:39:52.954173 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 00:39:52.959198 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 9 00:39:52.965192 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 00:39:52.970234 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:39:52.971731 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:39:52.986522 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 9 00:39:52.988738 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
May 9 00:39:52.993105 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 00:39:52.997751 dracut-cmdline[227]: dracut-dracut-053 May 9 00:39:53.000842 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b May 9 00:39:53.029836 systemd-resolved[233]: Positive Trust Anchors: May 9 00:39:53.029851 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 00:39:53.029882 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 00:39:53.032323 systemd-resolved[233]: Defaulting to hostname 'linux'. May 9 00:39:53.033371 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 00:39:53.038678 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 00:39:53.097413 kernel: SCSI subsystem initialized May 9 00:39:53.106407 kernel: Loading iSCSI transport class v2.0-870. May 9 00:39:53.117409 kernel: iscsi: registered transport (tcp) May 9 00:39:53.137691 kernel: iscsi: registered transport (qla4xxx) May 9 00:39:53.137715 kernel: QLogic iSCSI HBA Driver May 9 00:39:53.186740 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 9 00:39:53.195595 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 9 00:39:53.218408 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 9 00:39:53.218437 kernel: device-mapper: uevent: version 1.0.3 May 9 00:39:53.219954 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 9 00:39:53.260408 kernel: raid6: avx2x4 gen() 30354 MB/s May 9 00:39:53.277404 kernel: raid6: avx2x2 gen() 30830 MB/s May 9 00:39:53.294487 kernel: raid6: avx2x1 gen() 25888 MB/s May 9 00:39:53.294523 kernel: raid6: using algorithm avx2x2 gen() 30830 MB/s May 9 00:39:53.312492 kernel: raid6: .... xor() 19935 MB/s, rmw enabled May 9 00:39:53.312523 kernel: raid6: using avx2x2 recovery algorithm May 9 00:39:53.332418 kernel: xor: automatically using best checksumming function avx May 9 00:39:53.485413 kernel: Btrfs loaded, zoned=no, fsverity=no May 9 00:39:53.499320 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 9 00:39:53.511516 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:39:53.525502 systemd-udevd[412]: Using default interface naming scheme 'v255'. May 9 00:39:53.530068 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:39:53.535502 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
May 9 00:39:53.551715 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation May 9 00:39:53.583251 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 9 00:39:53.595597 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 00:39:53.656114 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 00:39:53.664529 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 9 00:39:53.676093 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 9 00:39:53.679054 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 9 00:39:53.681898 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:39:53.684228 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 00:39:53.695560 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 9 00:39:53.702456 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 9 00:39:53.702692 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 9 00:39:53.708212 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 9 00:39:53.708236 kernel: GPT:9289727 != 19775487 May 9 00:39:53.708262 kernel: GPT:Alternate GPT header not at the end of the disk. May 9 00:39:53.708272 kernel: GPT:9289727 != 19775487 May 9 00:39:53.708282 kernel: GPT: Use GNU Parted to correct GPT errors. May 9 00:39:53.708292 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:39:53.707651 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 9 00:39:53.711680 kernel: cryptd: max_cpu_qlen set to 1000 May 9 00:39:53.716438 kernel: libata version 3.00 loaded. May 9 00:39:53.720714 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 00:39:53.720799 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:39:53.726505 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:39:53.729501 kernel: AVX2 version of gcm_enc/dec engaged. May 9 00:39:53.729518 kernel: AES CTR mode by8 optimization enabled May 9 00:39:53.727127 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:39:53.727196 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:39:53.731879 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:39:53.732401 kernel: ahci 0000:00:1f.2: version 3.0 May 9 00:39:53.734414 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 9 00:39:53.739433 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 9 00:39:53.739648 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 9 00:39:53.744770 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 9 00:39:53.747740 kernel: scsi host0: ahci May 9 00:39:53.747943 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (461) May 9 00:39:53.753464 kernel: BTRFS: device fsid 8d57db23-a0fc-4362-9769-38fbda5747c1 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (458) May 9 00:39:53.753491 kernel: scsi host1: ahci May 9 00:39:53.756156 kernel: scsi host2: ahci May 9 00:39:53.760407 kernel: scsi host3: ahci May 9 00:39:53.764411 kernel: scsi host4: ahci May 9 00:39:53.768529 kernel: scsi host5: ahci May 9 00:39:53.768911 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 May 9 00:39:53.768923 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 May 9 00:39:53.768934 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 May 9 00:39:53.768944 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 May 9 00:39:53.768954 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 May 9 00:39:53.768969 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 May 9 00:39:53.775010 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 9 00:39:53.805541 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 9 00:39:53.808168 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:39:53.813839 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 9 00:39:53.813937 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 9 00:39:53.818792 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 00:39:53.829575 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 9 00:39:53.832727 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:39:53.839044 disk-uuid[563]: Primary Header is updated. May 9 00:39:53.839044 disk-uuid[563]: Secondary Entries is updated. May 9 00:39:53.839044 disk-uuid[563]: Secondary Header is updated. May 9 00:39:53.843409 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:39:53.847549 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:39:53.854621 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 9 00:39:54.076409 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 9 00:39:54.076513 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 9 00:39:54.076524 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 9 00:39:54.077408 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 9 00:39:54.078413 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 9 00:39:54.078427 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 9 00:39:54.079653 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 9 00:39:54.079665 kernel: ata3.00: applying bridge limits May 9 00:39:54.080699 kernel: ata3.00: configured for UDMA/100 May 9 00:39:54.081929 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 9 00:39:54.121969 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 9 00:39:54.122182 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 9 00:39:54.134409 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 9 00:39:54.849404 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:39:54.849670 disk-uuid[566]: The operation has completed successfully. May 9 00:39:54.877107 systemd[1]: disk-uuid.service: Deactivated successfully. May 9 00:39:54.877245 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 9 00:39:54.904510 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 9 00:39:54.907998 sh[590]: Success May 9 00:39:54.921408 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 9 00:39:54.956815 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 9 00:39:54.965821 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 9 00:39:54.970374 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 9 00:39:54.981430 kernel: BTRFS info (device dm-0): first mount of filesystem 8d57db23-a0fc-4362-9769-38fbda5747c1 May 9 00:39:54.981459 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 9 00:39:54.981471 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 9 00:39:54.982697 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 9 00:39:54.984404 kernel: BTRFS info (device dm-0): using free space tree May 9 00:39:54.988672 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 9 00:39:54.991042 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 9 00:39:55.004511 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 9 00:39:55.007064 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 9 00:39:55.015948 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:39:55.015981 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 9 00:39:55.015992 kernel: BTRFS info (device vda6): using free space tree May 9 00:39:55.019402 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:39:55.028034 systemd[1]: mnt-oem.mount: Deactivated successfully. May 9 00:39:55.029738 kernel: BTRFS info (device vda6): last unmount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:39:55.038800 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
May 9 00:39:55.048634 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 9 00:39:55.098151 ignition[683]: Ignition 2.19.0 May 9 00:39:55.098165 ignition[683]: Stage: fetch-offline May 9 00:39:55.098203 ignition[683]: no configs at "/usr/lib/ignition/base.d" May 9 00:39:55.098213 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:39:55.098308 ignition[683]: parsed url from cmdline: "" May 9 00:39:55.098312 ignition[683]: no config URL provided May 9 00:39:55.098317 ignition[683]: reading system config file "/usr/lib/ignition/user.ign" May 9 00:39:55.098326 ignition[683]: no config at "/usr/lib/ignition/user.ign" May 9 00:39:55.098355 ignition[683]: op(1): [started] loading QEMU firmware config module May 9 00:39:55.098360 ignition[683]: op(1): executing: "modprobe" "qemu_fw_cfg" May 9 00:39:55.108234 ignition[683]: op(1): [finished] loading QEMU firmware config module May 9 00:39:55.108269 ignition[683]: QEMU firmware config was not found. Ignoring... May 9 00:39:55.110703 ignition[683]: parsing config with SHA512: f4cf0fae930efeedc607d13f2abbdc16dabb6946dd1cad5fb527ea79c6bd01cadeeef8e7bdf2a4e4b048c67e73ae345a4dd4b47fd8ae20e74c2394ffbfb21305 May 9 00:39:55.113356 unknown[683]: fetched base config from "system" May 9 00:39:55.113370 unknown[683]: fetched user config from "qemu" May 9 00:39:55.113625 ignition[683]: fetch-offline: fetch-offline passed May 9 00:39:55.113687 ignition[683]: Ignition finished successfully May 9 00:39:55.116530 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 9 00:39:55.128843 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 00:39:55.140514 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 00:39:55.162285 systemd-networkd[779]: lo: Link UP May 9 00:39:55.162296 systemd-networkd[779]: lo: Gained carrier May 9 00:39:55.165178 systemd-networkd[779]: Enumeration completed May 9 00:39:55.165268 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 00:39:55.167179 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:39:55.167189 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 00:39:55.168011 systemd-networkd[779]: eth0: Link UP May 9 00:39:55.168015 systemd-networkd[779]: eth0: Gained carrier May 9 00:39:55.168021 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:39:55.168507 systemd[1]: Reached target network.target - Network. May 9 00:39:55.171450 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 9 00:39:55.180504 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 9 00:39:55.186427 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.160/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 00:39:55.200292 ignition[782]: Ignition 2.19.0 May 9 00:39:55.200303 ignition[782]: Stage: kargs May 9 00:39:55.201157 ignition[782]: no configs at "/usr/lib/ignition/base.d" May 9 00:39:55.201171 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:39:55.204224 ignition[782]: kargs: kargs passed May 9 00:39:55.204277 ignition[782]: Ignition finished successfully May 9 00:39:55.208548 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 9 00:39:55.226499 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 9 00:39:55.239636 ignition[791]: Ignition 2.19.0 May 9 00:39:55.239648 ignition[791]: Stage: disks May 9 00:39:55.239823 ignition[791]: no configs at "/usr/lib/ignition/base.d" May 9 00:39:55.239834 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:39:55.240526 ignition[791]: disks: disks passed May 9 00:39:55.240567 ignition[791]: Ignition finished successfully May 9 00:39:55.246125 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 9 00:39:55.248272 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 9 00:39:55.248347 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 9 00:39:55.250512 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 00:39:55.252869 systemd[1]: Reached target sysinit.target - System Initialization. May 9 00:39:55.254771 systemd[1]: Reached target basic.target - Basic System. May 9 00:39:55.266565 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 9 00:39:55.280722 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 9 00:39:55.286640 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 9 00:39:55.293463 systemd[1]: Mounting sysroot.mount - /sysroot... May 9 00:39:55.380272 systemd[1]: Mounted sysroot.mount - /sysroot. May 9 00:39:55.383232 kernel: EXT4-fs (vda9): mounted filesystem 4cb03022-f5a4-4664-b5b4-bc39fcc2f946 r/w with ordered data mode. Quota mode: none. May 9 00:39:55.381697 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 9 00:39:55.391453 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 00:39:55.393070 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 9 00:39:55.394470 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 9 00:39:55.403196 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (810) May 9 00:39:55.403213 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:39:55.403224 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 9 00:39:55.403235 kernel: BTRFS info (device vda6): using free space tree May 9 00:39:55.403246 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:39:55.394504 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 9 00:39:55.394526 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 9 00:39:55.401582 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
May 9 00:39:55.404510 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 9 00:39:55.407972 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 9 00:39:55.440694 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory May 9 00:39:55.445559 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory May 9 00:39:55.449366 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory May 9 00:39:55.454182 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory May 9 00:39:55.537102 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 9 00:39:55.557472 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 9 00:39:55.560594 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 9 00:39:55.565440 kernel: BTRFS info (device vda6): last unmount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:39:55.584606 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 9 00:39:55.586787 ignition[925]: INFO : Ignition 2.19.0 May 9 00:39:55.586787 ignition[925]: INFO : Stage: mount May 9 00:39:55.586787 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:39:55.586787 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:39:55.590779 ignition[925]: INFO : mount: mount passed May 9 00:39:55.590779 ignition[925]: INFO : Ignition finished successfully May 9 00:39:55.589904 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 9 00:39:55.605541 systemd[1]: Starting ignition-files.service - Ignition (files)... May 9 00:39:55.980195 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 9 00:39:55.996555 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 00:39:56.002413 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (939) May 9 00:39:56.004941 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:39:56.004963 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 9 00:39:56.004974 kernel: BTRFS info (device vda6): using free space tree May 9 00:39:56.007408 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:39:56.008502 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 9 00:39:56.030394 ignition[956]: INFO : Ignition 2.19.0 May 9 00:39:56.030394 ignition[956]: INFO : Stage: files May 9 00:39:56.032091 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:39:56.032091 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:39:56.032091 ignition[956]: DEBUG : files: compiled without relabeling support, skipping May 9 00:39:56.032091 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 9 00:39:56.032091 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 9 00:39:56.038687 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 9 00:39:56.038687 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 9 00:39:56.038687 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 9 00:39:56.038687 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 9 00:39:56.038687 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 9 00:39:56.038687 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 9 00:39:56.038687 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 9 00:39:56.038687 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 9 00:39:56.038687 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 9 00:39:56.038687 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 9 00:39:56.038687 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 9 00:39:56.034489 unknown[956]: wrote ssh authorized keys file for user: core May 9 00:39:56.427849 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 9 00:39:56.716579 systemd-networkd[779]: eth0: Gained IPv6LL May 9 00:39:56.854754 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 9 00:39:56.856884 ignition[956]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" May 9 00:39:56.858472 ignition[956]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 9 00:39:56.860776 ignition[956]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 9 00:39:56.860776 ignition[956]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" May 9 00:39:56.860776 ignition[956]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" 
May 9 00:39:56.883337 ignition[956]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" May 9 00:39:56.887464 ignition[956]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 9 00:39:56.889095 ignition[956]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" May 9 00:39:56.890614 ignition[956]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" May 9 00:39:56.892369 ignition[956]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" May 9 00:39:56.892369 ignition[956]: INFO : files: files passed May 9 00:39:56.894820 ignition[956]: INFO : Ignition finished successfully May 9 00:39:56.896513 systemd[1]: Finished ignition-files.service - Ignition (files). May 9 00:39:56.909506 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 9 00:39:56.912403 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 9 00:39:56.915119 systemd[1]: ignition-quench.service: Deactivated successfully. May 9 00:39:56.916122 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 9 00:39:56.922034 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory May 9 00:39:56.926103 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 00:39:56.927790 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 9 00:39:56.929328 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 00:39:56.932550 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 00:39:56.932809 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 9 00:39:56.947514 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 9 00:39:56.971595 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 9 00:39:56.972730 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 9 00:39:56.975412 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 9 00:39:56.977632 systemd[1]: Reached target initrd.target - Initrd Default Target. May 9 00:39:56.979724 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 9 00:39:56.993503 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 9 00:39:57.008199 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 00:39:57.011880 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 9 00:39:57.024463 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 9 00:39:57.026890 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:39:57.029322 systemd[1]: Stopped target timers.target - Timer Units. May 9 00:39:57.031207 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 9 00:39:57.032234 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
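The Ignition files stage above is driven by the host's provisioning config: it creates or modifies the `core` user and its SSH keys, writes `/home/core/install.sh` and `/etc/flatcar/update.conf`, links `/etc/extensions/kubernetes.raw` to the sysext image it downloads from the sysext-bakery release, and presets `coreos-metadata.service` to disabled. Below is a minimal sketch of an Ignition-style config that would produce those operations; the field names follow the public Ignition 3.x config schema, and the spec version, SSH key, modes, and file contents are illustrative placeholders, not values taken from this log.

```python
# Illustrative sketch only: an Ignition-style (spec 3.x) config matching the
# files-stage operations logged above. SSH key, modes and inline contents are
# placeholders; only the paths, link target and download URL come from the log.
import json

config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version, not in the log
    "passwd": {
        "users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]}  # placeholder
        ]
    },
    "storage": {
        "files": [
            {"path": "/home/core/install.sh", "mode": 0o755,
             "contents": {"source": "data:,%23%21%2Fbin%2Fbash%0A"}},   # placeholder body
            {"path": "/etc/flatcar/update.conf",
             "contents": {"source": "data:,REBOOT_STRATEGY=off%0A"}},   # placeholder body
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"}
        ],
    },
    "systemd": {
        "units": [
            # "setting preset to disabled" in the log corresponds to enabled: false
            {"name": "coreos-metadata.service", "enabled": False}
        ]
    },
}

print(json.dumps(config, indent=2))
```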
May 9 00:39:57.034849 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 9 00:39:57.036983 systemd[1]: Stopped target basic.target - Basic System. May 9 00:39:57.038874 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 9 00:39:57.041126 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 9 00:39:57.043492 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 9 00:39:57.045786 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 9 00:39:57.047922 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 9 00:39:57.050484 systemd[1]: Stopped target sysinit.target - System Initialization. May 9 00:39:57.052642 systemd[1]: Stopped target local-fs.target - Local File Systems. May 9 00:39:57.054750 systemd[1]: Stopped target swap.target - Swaps. May 9 00:39:57.056422 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 9 00:39:57.057456 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 9 00:39:57.059763 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 9 00:39:57.061996 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:39:57.064376 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 9 00:39:57.065338 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:39:57.067933 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 9 00:39:57.068950 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 9 00:39:57.071185 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 9 00:39:57.072273 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 9 00:39:57.074650 systemd[1]: Stopped target paths.target - Path Units. May 9 00:39:57.076436 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 9 00:39:57.077565 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:39:57.080487 systemd[1]: Stopped target slices.target - Slice Units. May 9 00:39:57.082469 systemd[1]: Stopped target sockets.target - Socket Units. May 9 00:39:57.084425 systemd[1]: iscsid.socket: Deactivated successfully. May 9 00:39:57.085327 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 9 00:39:57.087355 systemd[1]: iscsiuio.socket: Deactivated successfully. May 9 00:39:57.088282 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 00:39:57.090482 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 9 00:39:57.091706 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 00:39:57.094314 systemd[1]: ignition-files.service: Deactivated successfully. May 9 00:39:57.095336 systemd[1]: Stopped ignition-files.service - Ignition (files). May 9 00:39:57.114540 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 9 00:39:57.116493 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 9 00:39:57.117565 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 9 00:39:57.120761 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 9 00:39:57.121721 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
May 9 00:39:57.123789 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 9 00:39:57.126005 ignition[1011]: INFO : Ignition 2.19.0 May 9 00:39:57.126005 ignition[1011]: INFO : Stage: umount May 9 00:39:57.127671 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:39:57.127671 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:39:57.127671 ignition[1011]: INFO : umount: umount passed May 9 00:39:57.127671 ignition[1011]: INFO : Ignition finished successfully May 9 00:39:57.126965 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 9 00:39:57.128417 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 9 00:39:57.134335 systemd[1]: ignition-mount.service: Deactivated successfully. May 9 00:39:57.134462 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 9 00:39:57.136540 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 9 00:39:57.136647 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 9 00:39:57.139245 systemd[1]: Stopped target network.target - Network. May 9 00:39:57.140570 systemd[1]: ignition-disks.service: Deactivated successfully. May 9 00:39:57.140629 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 9 00:39:57.141812 systemd[1]: ignition-kargs.service: Deactivated successfully. May 9 00:39:57.141860 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 9 00:39:57.143532 systemd[1]: ignition-setup.service: Deactivated successfully. May 9 00:39:57.143577 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 9 00:39:57.145634 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 9 00:39:57.145681 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 9 00:39:57.146149 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 9 00:39:57.150448 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 9 00:39:57.152430 systemd-networkd[779]: eth0: DHCPv6 lease lost May 9 00:39:57.155144 systemd[1]: systemd-networkd.service: Deactivated successfully. May 9 00:39:57.155295 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 9 00:39:57.156988 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 9 00:39:57.157027 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 9 00:39:57.163453 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 9 00:39:57.164101 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 9 00:39:57.164154 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 00:39:57.166501 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:39:57.171887 systemd[1]: systemd-resolved.service: Deactivated successfully. May 9 00:39:57.172004 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 9 00:39:57.181648 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 00:39:57.181720 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 00:39:57.182878 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 9 00:39:57.182929 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 9 00:39:57.183922 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
May 9 00:39:57.183974 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:39:57.188014 systemd[1]: systemd-udevd.service: Deactivated successfully. May 9 00:39:57.188191 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:39:57.190339 systemd[1]: network-cleanup.service: Deactivated successfully. May 9 00:39:57.190467 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 9 00:39:57.192124 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 9 00:39:57.192183 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 9 00:39:57.193030 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 9 00:39:57.193069 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:39:57.193341 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 9 00:39:57.193403 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 9 00:39:57.194230 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 9 00:39:57.194275 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 9 00:39:57.195059 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 00:39:57.195103 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:39:57.196265 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 9 00:39:57.204597 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 9 00:39:57.204651 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:39:57.204979 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:39:57.205025 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:39:57.211824 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 9 00:39:57.211929 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 9 00:39:57.238420 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 9 00:39:57.396977 systemd[1]: sysroot-boot.service: Deactivated successfully. May 9 00:39:57.397097 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 9 00:39:57.399136 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 9 00:39:57.400922 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 9 00:39:57.400974 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 9 00:39:57.415499 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 9 00:39:57.422147 systemd[1]: Switching root. May 9 00:39:57.457724 systemd-journald[193]: Journal stopped May 9 00:39:58.512094 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
May 9 00:39:58.512160 kernel: SELinux: policy capability network_peer_controls=1 May 9 00:39:58.512178 kernel: SELinux: policy capability open_perms=1 May 9 00:39:58.512190 kernel: SELinux: policy capability extended_socket_class=1 May 9 00:39:58.512207 kernel: SELinux: policy capability always_check_network=0 May 9 00:39:58.512219 kernel: SELinux: policy capability cgroup_seclabel=1 May 9 00:39:58.512230 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 9 00:39:58.512241 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 9 00:39:58.512256 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 9 00:39:58.512268 kernel: audit: type=1403 audit(1746751197.808:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 9 00:39:58.512284 systemd[1]: Successfully loaded SELinux policy in 48.622ms. May 9 00:39:58.512304 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.314ms. May 9 00:39:58.512317 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 00:39:58.512329 systemd[1]: Detected virtualization kvm. May 9 00:39:58.512341 systemd[1]: Detected architecture x86-64. May 9 00:39:58.512353 systemd[1]: Detected first boot. May 9 00:39:58.512365 systemd[1]: Initializing machine ID from VM UUID. May 9 00:39:58.512394 zram_generator::config[1054]: No configuration found. May 9 00:39:58.512408 systemd[1]: Populated /etc with preset unit settings. May 9 00:39:58.512420 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 9 00:39:58.512432 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 9 00:39:58.512446 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 9 00:39:58.512461 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 9 00:39:58.512473 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 9 00:39:58.512485 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 9 00:39:58.512497 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 9 00:39:58.512509 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 9 00:39:58.512521 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 9 00:39:58.512534 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 9 00:39:58.512546 systemd[1]: Created slice user.slice - User and Session Slice. May 9 00:39:58.512558 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:39:58.512574 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:39:58.512586 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 9 00:39:58.512598 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 9 00:39:58.512610 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 9 00:39:58.512622 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
May 9 00:39:58.512634 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 9 00:39:58.512646 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:39:58.512658 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 9 00:39:58.512670 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 9 00:39:58.512687 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 9 00:39:58.512703 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 9 00:39:58.512717 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:39:58.512729 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 00:39:58.512741 systemd[1]: Reached target slices.target - Slice Units. May 9 00:39:58.512753 systemd[1]: Reached target swap.target - Swaps. May 9 00:39:58.512771 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 9 00:39:58.512786 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 9 00:39:58.512799 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 00:39:58.512810 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 00:39:58.512822 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:39:58.512835 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 9 00:39:58.512848 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 9 00:39:58.512859 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 9 00:39:58.512872 systemd[1]: Mounting media.mount - External Media Directory... May 9 00:39:58.512883 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:39:58.512898 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 9 00:39:58.512910 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 9 00:39:58.512922 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 9 00:39:58.512934 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 9 00:39:58.512946 systemd[1]: Reached target machines.target - Containers. May 9 00:39:58.512958 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 9 00:39:58.512970 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:39:58.512982 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 00:39:58.513458 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 9 00:39:58.513478 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:39:58.513490 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 00:39:58.513503 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:39:58.513515 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 9 00:39:58.513526 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 9 00:39:58.513538 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 9 00:39:58.513551 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 9 00:39:58.513563 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 9 00:39:58.513578 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 9 00:39:58.513590 systemd[1]: Stopped systemd-fsck-usr.service. May 9 00:39:58.513602 kernel: fuse: init (API version 7.39) May 9 00:39:58.513616 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 00:39:58.513628 kernel: loop: module loaded May 9 00:39:58.513639 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 00:39:58.513651 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 9 00:39:58.513663 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 9 00:39:58.513675 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 00:39:58.513689 systemd[1]: verity-setup.service: Deactivated successfully. May 9 00:39:58.513701 systemd[1]: Stopped verity-setup.service. May 9 00:39:58.513731 systemd-journald[1124]: Collecting audit messages is disabled. May 9 00:39:58.513752 kernel: ACPI: bus type drm_connector registered May 9 00:39:58.513772 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:39:58.513784 systemd-journald[1124]: Journal started May 9 00:39:58.513808 systemd-journald[1124]: Runtime Journal (/run/log/journal/1a8be440eb5f418da7f6c5dc2caef3a5) is 6.0M, max 48.4M, 42.3M free. May 9 00:39:58.301031 systemd[1]: Queued start job for default target multi-user.target. May 9 00:39:58.315995 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 9 00:39:58.316432 systemd[1]: systemd-journald.service: Deactivated successfully. May 9 00:39:58.516418 systemd[1]: Started systemd-journald.service - Journal Service. May 9 00:39:58.518117 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 9 00:39:58.519417 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 9 00:39:58.520697 systemd[1]: Mounted media.mount - External Media Directory. May 9 00:39:58.521891 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 9 00:39:58.523244 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 9 00:39:58.524719 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 9 00:39:58.526032 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 9 00:39:58.527539 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 9 00:39:58.529096 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 9 00:39:58.529269 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 9 00:39:58.530736 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:39:58.530912 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:39:58.532332 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 00:39:58.532536 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
May 9 00:39:58.534030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:39:58.534234 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:39:58.535903 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 9 00:39:58.536081 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 9 00:39:58.537705 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:39:58.537892 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:39:58.539297 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 00:39:58.540708 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 9 00:39:58.542291 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 9 00:39:58.556078 systemd[1]: Reached target network-pre.target - Preparation for Network. May 9 00:39:58.564458 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 9 00:39:58.566676 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 9 00:39:58.568057 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 9 00:39:58.568086 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 00:39:58.570238 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 9 00:39:58.572778 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 9 00:39:58.579017 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 9 00:39:58.580221 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:39:58.582305 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 9 00:39:58.586050 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 9 00:39:58.587266 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 00:39:58.588294 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 9 00:39:58.591475 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 00:39:58.595601 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:39:58.599003 systemd-journald[1124]: Time spent on flushing to /var/log/journal/1a8be440eb5f418da7f6c5dc2caef3a5 is 17.746ms for 932 entries. May 9 00:39:58.599003 systemd-journald[1124]: System Journal (/var/log/journal/1a8be440eb5f418da7f6c5dc2caef3a5) is 8.0M, max 195.6M, 187.6M free. May 9 00:39:58.636421 systemd-journald[1124]: Received client request to flush runtime journal. May 9 00:39:58.636457 kernel: loop0: detected capacity change from 0 to 142488 May 9 00:39:58.600669 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 9 00:39:58.604528 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 9 00:39:58.608288 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 9 00:39:58.609716 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
May 9 00:39:58.611442 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 9 00:39:58.615794 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 9 00:39:58.622497 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 9 00:39:58.633617 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 9 00:39:58.639999 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 9 00:39:58.656615 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 00:39:58.659071 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 00:39:58.665530 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 9 00:39:58.672655 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 9 00:39:58.675471 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 9 00:39:58.676418 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 9 00:39:58.678365 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 9 00:39:58.691053 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 00:39:58.692796 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 9 00:39:58.696399 kernel: loop1: detected capacity change from 0 to 140768 May 9 00:39:58.711782 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. May 9 00:39:58.711802 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. May 9 00:39:58.719102 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:39:58.729423 kernel: loop2: detected capacity change from 0 to 218376 May 9 00:39:58.764424 kernel: loop3: detected capacity change from 0 to 142488 May 9 00:39:58.775405 kernel: loop4: detected capacity change from 0 to 140768 May 9 00:39:58.789421 kernel: loop5: detected capacity change from 0 to 218376 May 9 00:39:58.796479 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 9 00:39:58.797067 (sd-merge)[1193]: Merged extensions into '/usr'. May 9 00:39:58.801601 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... May 9 00:39:58.801619 systemd[1]: Reloading... May 9 00:39:58.856063 zram_generator::config[1218]: No configuration found. May 9 00:39:58.915873 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 9 00:39:58.972804 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:39:59.020940 systemd[1]: Reloading finished in 218 ms. May 9 00:39:59.058281 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 9 00:39:59.059901 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 9 00:39:59.076536 systemd[1]: Starting ensure-sysext.service... May 9 00:39:59.078460 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
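The loop0-loop5 capacity changes interleaved above are the system-extension images being attached as loop devices before sd-merge overlays 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' onto /usr. A rough size check follows, assuming the kernel's "detected capacity change" figures are 512-byte sectors (an assumption, not stated in the log); note each size appears twice because the images are scanned again after the merge.

```python
# Convert the "detected capacity change" figures above to mebibytes,
# assuming they are counts of 512-byte sectors (assumption, see note above).
SECTOR = 512

capacities = {
    "loop0/loop3": 142488,
    "loop1/loop4": 140768,
    "loop2/loop5": 218376,
}

for name, sectors in capacities.items():
    mib = sectors * SECTOR / 2**20
    print(f"{name}: {sectors} sectors ~ {mib:.1f} MiB")
# -> roughly 69.6 MiB, 68.7 MiB and 106.6 MiB: plausible sizes for the three
#    extension images named by sd-merge (no particular mapping is implied).
```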
May 9 00:39:59.085809 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)... May 9 00:39:59.085818 systemd[1]: Reloading... May 9 00:39:59.128332 zram_generator::config[1286]: No configuration found. May 9 00:39:59.128086 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 9 00:39:59.128499 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 9 00:39:59.129489 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 9 00:39:59.129794 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. May 9 00:39:59.129873 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. May 9 00:39:59.134457 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. May 9 00:39:59.134468 systemd-tmpfiles[1257]: Skipping /boot May 9 00:39:59.145280 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. May 9 00:39:59.145296 systemd-tmpfiles[1257]: Skipping /boot May 9 00:39:59.234141 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:39:59.282977 systemd[1]: Reloading finished in 196 ms. May 9 00:39:59.302228 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 9 00:39:59.316836 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:39:59.325332 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 00:39:59.327926 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 9 00:39:59.330365 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 9 00:39:59.335313 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 00:39:59.342596 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:39:59.346490 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 9 00:39:59.349918 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:39:59.350087 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:39:59.352926 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:39:59.355162 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:39:59.359641 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:39:59.361649 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:39:59.370798 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 9 00:39:59.372483 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:39:59.374079 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 9 00:39:59.376882 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 9 00:39:59.377085 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:39:59.379771 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:39:59.379963 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:39:59.381289 augenrules[1347]: No rules May 9 00:39:59.381220 systemd-udevd[1333]: Using default interface naming scheme 'v255'. May 9 00:39:59.382024 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 00:39:59.383771 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:39:59.383957 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:39:59.394672 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 9 00:39:59.397920 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:39:59.398175 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:39:59.405064 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:39:59.409595 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:39:59.416449 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:39:59.417678 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:39:59.419678 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 9 00:39:59.420868 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:39:59.423697 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:39:59.425348 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 9 00:39:59.427058 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 9 00:39:59.435645 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:39:59.436290 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:39:59.438743 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:39:59.439040 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:39:59.442085 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:39:59.442286 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:39:59.460838 systemd[1]: Finished ensure-sysext.service. May 9 00:39:59.468411 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1363) May 9 00:39:59.465414 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 9 00:39:59.473299 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 9 00:39:59.476225 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:39:59.476364 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:39:59.483581 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
May 9 00:39:59.486560 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 00:39:59.497888 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:39:59.502533 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:39:59.503689 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:39:59.508545 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 00:39:59.511575 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 9 00:39:59.515432 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 9 00:39:59.515461 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:39:59.516052 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:39:59.516234 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:39:59.517796 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 00:39:59.517967 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 00:39:59.519564 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:39:59.519797 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:39:59.521457 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:39:59.521619 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:39:59.531716 systemd-resolved[1326]: Positive Trust Anchors: May 9 00:39:59.539884 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 9 00:39:59.531871 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 00:39:59.531901 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 00:39:59.536452 systemd-resolved[1326]: Defaulting to hostname 'linux'. May 9 00:39:59.539758 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 00:39:59.547431 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 00:39:59.549924 kernel: ACPI: button: Power Button [PWRF] May 9 00:39:59.550126 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 00:39:59.558549 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 9 00:39:59.559869 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 9 00:39:59.559951 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 00:39:59.561414 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 9 00:39:59.561684 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 9 00:39:59.563027 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 9 00:39:59.592082 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 9 00:39:59.616559 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 May 9 00:39:59.637201 systemd-networkd[1403]: lo: Link UP May 9 00:39:59.663257 kernel: mousedev: PS/2 mouse device common for all mice May 9 00:39:59.637417 systemd-networkd[1403]: lo: Gained carrier May 9 00:39:59.639008 systemd-networkd[1403]: Enumeration completed May 9 00:39:59.663414 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:39:59.663936 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:39:59.663940 systemd-networkd[1403]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 00:39:59.665040 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 00:39:59.666453 systemd-networkd[1403]: eth0: Link UP May 9 00:39:59.666458 systemd-networkd[1403]: eth0: Gained carrier May 9 00:39:59.666480 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:39:59.666565 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 9 00:39:59.668178 systemd[1]: Reached target network.target - Network. May 9 00:39:59.669225 systemd[1]: Reached target time-set.target - System Time Set. May 9 00:39:59.672648 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 9 00:39:59.679024 kernel: kvm_amd: TSC scaling supported May 9 00:39:59.679067 kernel: kvm_amd: Nested Virtualization enabled May 9 00:39:59.679081 kernel: kvm_amd: Nested Paging enabled May 9 00:39:59.679093 kernel: kvm_amd: LBR virtualization supported May 9 00:39:59.679105 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 9 00:39:59.679126 kernel: kvm_amd: Virtual GIF supported May 9 00:39:59.682647 systemd-networkd[1403]: eth0: DHCPv4 address 10.0.0.160/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 00:39:59.684256 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. May 9 00:40:00.332682 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 9 00:40:00.332867 systemd-resolved[1326]: Clock change detected. Flushing caches. May 9 00:40:00.332955 systemd-timesyncd[1404]: Initial clock synchronization to Fri 2025-05-09 00:40:00.332408 UTC. May 9 00:40:00.346989 kernel: EDAC MC: Ver: 3.0.0 May 9 00:40:00.374506 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 9 00:40:00.416159 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 9 00:40:00.417888 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:40:00.425732 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
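The journal timestamps above jump from 00:39:59.684256 to 00:40:00.332682 between consecutive entries: systemd-timesyncd has stepped the clock after its first exchange with 10.0.0.1:123, which is also why systemd-resolved logs "Clock change detected. Flushing caches." A quick estimate of the step size is below; it is an upper bound, since the interval also includes whatever real time the NTP exchange took.

```python
# Apparent clock step implied by the timestamps above: last entry before
# synchronization at 00:39:59.684256, next entry at 00:40:00.332682.
from datetime import datetime

before = datetime(2025, 5, 9, 0, 39, 59, 684256)
after = datetime(2025, 5, 9, 0, 40, 0, 332682)

step = (after - before).total_seconds()
print(f"apparent clock step ~ {step:.3f} s forward")  # ~ 0.648 s
```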
May 9 00:40:00.465083 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 9 00:40:00.466643 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 00:40:00.467768 systemd[1]: Reached target sysinit.target - System Initialization. May 9 00:40:00.468963 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 9 00:40:00.470227 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 9 00:40:00.471681 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 9 00:40:00.472849 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 9 00:40:00.474093 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 9 00:40:00.475329 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 9 00:40:00.475354 systemd[1]: Reached target paths.target - Path Units. May 9 00:40:00.476248 systemd[1]: Reached target timers.target - Timer Units. May 9 00:40:00.478020 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 9 00:40:00.480543 systemd[1]: Starting docker.socket - Docker Socket for the API... May 9 00:40:00.490430 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 9 00:40:00.492893 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 9 00:40:00.494506 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 9 00:40:00.495720 systemd[1]: Reached target sockets.target - Socket Units. May 9 00:40:00.496717 systemd[1]: Reached target basic.target - Basic System. May 9 00:40:00.497736 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 9 00:40:00.497758 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 9 00:40:00.498740 systemd[1]: Starting containerd.service - containerd container runtime... May 9 00:40:00.500905 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 9 00:40:00.504636 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 00:40:00.504556 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 9 00:40:00.508424 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 9 00:40:00.510539 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 9 00:40:00.511920 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 9 00:40:00.515682 jq[1433]: false May 9 00:40:00.517468 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 9 00:40:00.520096 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 9 00:40:00.524253 systemd[1]: Starting systemd-logind.service - User Login Management... May 9 00:40:00.525732 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 9 00:40:00.526297 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
May 9 00:40:00.529350 extend-filesystems[1434]: Found loop3 May 9 00:40:00.529350 extend-filesystems[1434]: Found loop4 May 9 00:40:00.529350 extend-filesystems[1434]: Found loop5 May 9 00:40:00.529350 extend-filesystems[1434]: Found sr0 May 9 00:40:00.529350 extend-filesystems[1434]: Found vda May 9 00:40:00.529350 extend-filesystems[1434]: Found vda1 May 9 00:40:00.529350 extend-filesystems[1434]: Found vda2 May 9 00:40:00.529350 extend-filesystems[1434]: Found vda3 May 9 00:40:00.529350 extend-filesystems[1434]: Found usr May 9 00:40:00.529350 extend-filesystems[1434]: Found vda4 May 9 00:40:00.529350 extend-filesystems[1434]: Found vda6 May 9 00:40:00.529350 extend-filesystems[1434]: Found vda7 May 9 00:40:00.529350 extend-filesystems[1434]: Found vda9 May 9 00:40:00.529082 systemd[1]: Starting update-engine.service - Update Engine... May 9 00:40:00.547752 dbus-daemon[1432]: [system] SELinux support is enabled May 9 00:40:00.549823 extend-filesystems[1434]: Checking size of /dev/vda9 May 9 00:40:00.531325 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 9 00:40:00.553008 extend-filesystems[1434]: Resized partition /dev/vda9 May 9 00:40:00.535999 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 9 00:40:00.554237 jq[1445]: true May 9 00:40:00.541424 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 9 00:40:00.541641 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 9 00:40:00.542815 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 9 00:40:00.543044 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 9 00:40:00.549009 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 9 00:40:00.556953 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1361) May 9 00:40:00.563962 extend-filesystems[1457]: resize2fs 1.47.1 (20-May-2024) May 9 00:40:00.566417 update_engine[1442]: I20250509 00:40:00.566346 1442 main.cc:92] Flatcar Update Engine starting May 9 00:40:00.566815 jq[1455]: true May 9 00:40:00.567060 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 9 00:40:00.567838 update_engine[1442]: I20250509 00:40:00.567807 1442 update_check_scheduler.cc:74] Next update check in 7m25s May 9 00:40:00.568558 systemd[1]: motdgen.service: Deactivated successfully. May 9 00:40:00.569407 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 9 00:40:00.569625 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 9 00:40:00.584471 systemd[1]: Started update-engine.service - Update Engine. May 9 00:40:00.590088 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 9 00:40:00.590210 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 9 00:40:00.592317 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 9 00:40:00.592346 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
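The resize2fs/EXT4 messages around this point grow /dev/vda9 from 553472 to 1864699 blocks, and extend-filesystems reports 4 KiB blocks, so the root filesystem goes from roughly 2.1 GiB to 7.1 GiB; this is consistent with the service's first-boot job of growing the root filesystem to fill its partition. The arithmetic, as a quick check:

```python
# Sizes implied by the ext4 resize logged above: /dev/vda9 grows from
# 553472 to 1864699 blocks, with 4 KiB blocks per extend-filesystems.
BLOCK = 4096

before_blocks = 553472
after_blocks = 1864699

before_gib = before_blocks * BLOCK / 2**30
after_gib = after_blocks * BLOCK / 2**30

print(f"before: {before_gib:.2f} GiB, after: {after_gib:.2f} GiB")
# -> roughly 2.11 GiB before and 7.11 GiB after the resize.
```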
May 9 00:40:00.600091 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 9 00:40:00.602984 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 9 00:40:00.629162 locksmithd[1472]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 9 00:40:00.631454 systemd-logind[1441]: Watching system buttons on /dev/input/event1 (Power Button) May 9 00:40:00.631477 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 9 00:40:00.632869 systemd-logind[1441]: New seat seat0. May 9 00:40:00.633329 extend-filesystems[1457]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 9 00:40:00.633329 extend-filesystems[1457]: old_desc_blocks = 1, new_desc_blocks = 1 May 9 00:40:00.633329 extend-filesystems[1457]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 9 00:40:00.636629 extend-filesystems[1434]: Resized filesystem in /dev/vda9 May 9 00:40:00.634840 systemd[1]: extend-filesystems.service: Deactivated successfully. May 9 00:40:00.635082 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 9 00:40:00.641322 systemd[1]: Started systemd-logind.service - User Login Management. May 9 00:40:00.644390 bash[1482]: Updated "/home/core/.ssh/authorized_keys" May 9 00:40:00.644329 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 9 00:40:00.647650 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 9 00:40:00.755968 containerd[1456]: time="2025-05-09T00:40:00.755273727Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 9 00:40:00.777478 containerd[1456]: time="2025-05-09T00:40:00.777433092Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 9 00:40:00.779234 containerd[1456]: time="2025-05-09T00:40:00.779190959Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 9 00:40:00.779234 containerd[1456]: time="2025-05-09T00:40:00.779223470Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 9 00:40:00.779291 containerd[1456]: time="2025-05-09T00:40:00.779240642Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 9 00:40:00.779435 containerd[1456]: time="2025-05-09T00:40:00.779405221Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 9 00:40:00.779435 containerd[1456]: time="2025-05-09T00:40:00.779427172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 9 00:40:00.779512 containerd[1456]: time="2025-05-09T00:40:00.779490311Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:40:00.779512 containerd[1456]: time="2025-05-09T00:40:00.779508515Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 May 9 00:40:00.779725 containerd[1456]: time="2025-05-09T00:40:00.779691909Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:40:00.779725 containerd[1456]: time="2025-05-09T00:40:00.779713389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 9 00:40:00.779768 containerd[1456]: time="2025-05-09T00:40:00.779726223Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:40:00.779768 containerd[1456]: time="2025-05-09T00:40:00.779736613Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 9 00:40:00.779849 containerd[1456]: time="2025-05-09T00:40:00.779827273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 9 00:40:00.780112 containerd[1456]: time="2025-05-09T00:40:00.780080778Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 9 00:40:00.780228 containerd[1456]: time="2025-05-09T00:40:00.780198629Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:40:00.780228 containerd[1456]: time="2025-05-09T00:40:00.780215982Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 9 00:40:00.780347 containerd[1456]: time="2025-05-09T00:40:00.780318414Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 9 00:40:00.780397 containerd[1456]: time="2025-05-09T00:40:00.780376934Z" level=info msg="metadata content store policy set" policy=shared May 9 00:40:00.895950 containerd[1456]: time="2025-05-09T00:40:00.895909630Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 9 00:40:00.896044 containerd[1456]: time="2025-05-09T00:40:00.895971907Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 9 00:40:00.896044 containerd[1456]: time="2025-05-09T00:40:00.895988448Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 9 00:40:00.896044 containerd[1456]: time="2025-05-09T00:40:00.896003246Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 9 00:40:00.896044 containerd[1456]: time="2025-05-09T00:40:00.896021650Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 9 00:40:00.896197 containerd[1456]: time="2025-05-09T00:40:00.896140623Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 9 00:40:00.896403 containerd[1456]: time="2025-05-09T00:40:00.896375264Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 May 9 00:40:00.896513 containerd[1456]: time="2025-05-09T00:40:00.896483116Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 9 00:40:00.896513 containerd[1456]: time="2025-05-09T00:40:00.896503133Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 9 00:40:00.896570 containerd[1456]: time="2025-05-09T00:40:00.896514745Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 9 00:40:00.896570 containerd[1456]: time="2025-05-09T00:40:00.896527389Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 9 00:40:00.896570 containerd[1456]: time="2025-05-09T00:40:00.896538981Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 9 00:40:00.896570 containerd[1456]: time="2025-05-09T00:40:00.896550252Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 9 00:40:00.896570 containerd[1456]: time="2025-05-09T00:40:00.896562004Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 9 00:40:00.896687 containerd[1456]: time="2025-05-09T00:40:00.896577924Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 9 00:40:00.896687 containerd[1456]: time="2025-05-09T00:40:00.896590868Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 9 00:40:00.896687 containerd[1456]: time="2025-05-09T00:40:00.896605816Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 9 00:40:00.896687 containerd[1456]: time="2025-05-09T00:40:00.896616416Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 9 00:40:00.896687 containerd[1456]: time="2025-05-09T00:40:00.896633548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 9 00:40:00.896687 containerd[1456]: time="2025-05-09T00:40:00.896646302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 9 00:40:00.896687 containerd[1456]: time="2025-05-09T00:40:00.896659196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 9 00:40:00.896687 containerd[1456]: time="2025-05-09T00:40:00.896671439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 9 00:40:00.896687 containerd[1456]: time="2025-05-09T00:40:00.896682570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 9 00:40:00.896687 containerd[1456]: time="2025-05-09T00:40:00.896694352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 9 00:40:00.896953 containerd[1456]: time="2025-05-09T00:40:00.896711865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 9 00:40:00.896953 containerd[1456]: time="2025-05-09T00:40:00.896724048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 May 9 00:40:00.896953 containerd[1456]: time="2025-05-09T00:40:00.896735680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 9 00:40:00.896953 containerd[1456]: time="2025-05-09T00:40:00.896750357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 9 00:40:00.896953 containerd[1456]: time="2025-05-09T00:40:00.896760777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 9 00:40:00.896953 containerd[1456]: time="2025-05-09T00:40:00.896774573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 9 00:40:00.896953 containerd[1456]: time="2025-05-09T00:40:00.896786886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 9 00:40:00.896953 containerd[1456]: time="2025-05-09T00:40:00.896800992Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 9 00:40:00.896953 containerd[1456]: time="2025-05-09T00:40:00.896820178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 9 00:40:00.896953 containerd[1456]: time="2025-05-09T00:40:00.896831800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 9 00:40:00.896953 containerd[1456]: time="2025-05-09T00:40:00.896841789Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 9 00:40:00.896953 containerd[1456]: time="2025-05-09T00:40:00.896888576Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 9 00:40:00.896953 containerd[1456]: time="2025-05-09T00:40:00.896901941Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 9 00:40:00.896953 containerd[1456]: time="2025-05-09T00:40:00.896913062Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 9 00:40:00.897317 containerd[1456]: time="2025-05-09T00:40:00.896923993Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 9 00:40:00.897317 containerd[1456]: time="2025-05-09T00:40:00.896970330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 9 00:40:00.897317 containerd[1456]: time="2025-05-09T00:40:00.896983044Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 9 00:40:00.897317 containerd[1456]: time="2025-05-09T00:40:00.896998513Z" level=info msg="NRI interface is disabled by configuration." May 9 00:40:00.897317 containerd[1456]: time="2025-05-09T00:40:00.897017498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 9 00:40:00.897450 containerd[1456]: time="2025-05-09T00:40:00.897242220Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 9 00:40:00.897450 containerd[1456]: time="2025-05-09T00:40:00.897308694Z" level=info msg="Connect containerd service" May 9 00:40:00.897450 containerd[1456]: time="2025-05-09T00:40:00.897340945Z" level=info msg="using legacy CRI server" May 9 00:40:00.897450 containerd[1456]: time="2025-05-09T00:40:00.897348850Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 00:40:00.897450 containerd[1456]: time="2025-05-09T00:40:00.897433378Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 00:40:00.898019 containerd[1456]: time="2025-05-09T00:40:00.897992868Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 00:40:00.898525 
containerd[1456]: time="2025-05-09T00:40:00.898151565Z" level=info msg="Start subscribing containerd event" May 9 00:40:00.898525 containerd[1456]: time="2025-05-09T00:40:00.898241294Z" level=info msg="Start recovering state" May 9 00:40:00.898525 containerd[1456]: time="2025-05-09T00:40:00.898309862Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 00:40:00.898525 containerd[1456]: time="2025-05-09T00:40:00.898336562Z" level=info msg="Start event monitor" May 9 00:40:00.898525 containerd[1456]: time="2025-05-09T00:40:00.898350458Z" level=info msg="Start snapshots syncer" May 9 00:40:00.898525 containerd[1456]: time="2025-05-09T00:40:00.898359796Z" level=info msg="Start cni network conf syncer for default" May 9 00:40:00.898525 containerd[1456]: time="2025-05-09T00:40:00.898368041Z" level=info msg="Start streaming server" May 9 00:40:00.898525 containerd[1456]: time="2025-05-09T00:40:00.898367110Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 00:40:00.898525 containerd[1456]: time="2025-05-09T00:40:00.898448192Z" level=info msg="containerd successfully booted in 0.144690s" May 9 00:40:00.898639 systemd[1]: Started containerd.service - containerd container runtime. May 9 00:40:00.942084 sshd_keygen[1467]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 9 00:40:00.967860 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 9 00:40:00.984190 systemd[1]: Starting issuegen.service - Generate /run/issue... May 9 00:40:00.992153 systemd[1]: issuegen.service: Deactivated successfully. May 9 00:40:00.992417 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 00:40:00.995237 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 00:40:01.010029 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 9 00:40:01.012679 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 00:40:01.014801 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 9 00:40:01.016063 systemd[1]: Reached target getty.target - Login Prompts. May 9 00:40:02.227138 systemd-networkd[1403]: eth0: Gained IPv6LL May 9 00:40:02.231392 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 9 00:40:02.233195 systemd[1]: Reached target network-online.target - Network is Online. May 9 00:40:02.245184 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 00:40:02.247837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:40:02.250795 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 00:40:02.273157 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 00:40:02.273426 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 9 00:40:02.275123 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 00:40:02.277331 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 00:40:02.915567 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:40:02.917258 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 00:40:02.918521 systemd[1]: Startup finished in 726ms (kernel) + 5.109s (initrd) + 4.510s (userspace) = 10.346s. 
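The CRI plugin above starts without pod networking because /etc/cni/net.d is empty; containerd keeps retrying until a network add-on drops a config there (in this boot, Cilium is deployed later for exactly that). Purely as a minimal sketch, assuming the stock bridge and host-local plugins are present under /opt/cni/bin and reusing the 192.168.1.0/24 pod CIDR the kubelet receives further down, a hand-written conflist that would satisfy the check looks like this:

    mkdir -p /etc/cni/net.d
    cat >/etc/cni/net.d/10-bridge.conflist <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
        }
      ]
    }
    EOF

The file name and subnet are illustrative only; on a node that will be networked by Cilium the error above is expected and harmless.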
May 9 00:40:02.941332 (kubelet)[1537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:40:03.329894 kubelet[1537]: E0509 00:40:03.329776 1537 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:40:03.333445 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:40:03.333652 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:40:06.366181 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 00:40:06.367360 systemd[1]: Started sshd@0-10.0.0.160:22-10.0.0.1:50022.service - OpenSSH per-connection server daemon (10.0.0.1:50022). May 9 00:40:06.407978 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 50022 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w May 9 00:40:06.409775 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:40:06.418431 systemd-logind[1441]: New session 1 of user core. May 9 00:40:06.419703 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 00:40:06.437135 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 00:40:06.448179 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 00:40:06.451044 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 00:40:06.458582 (systemd)[1554]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 00:40:06.568661 systemd[1554]: Queued start job for default target default.target. May 9 00:40:06.578210 systemd[1554]: Created slice app.slice - User Application Slice. May 9 00:40:06.578235 systemd[1554]: Reached target paths.target - Paths. May 9 00:40:06.578248 systemd[1554]: Reached target timers.target - Timers. May 9 00:40:06.579739 systemd[1554]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 00:40:06.591644 systemd[1554]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 00:40:06.591766 systemd[1554]: Reached target sockets.target - Sockets. May 9 00:40:06.591784 systemd[1554]: Reached target basic.target - Basic System. May 9 00:40:06.591818 systemd[1554]: Reached target default.target - Main User Target. May 9 00:40:06.591848 systemd[1554]: Startup finished in 127ms. May 9 00:40:06.592224 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 00:40:06.593696 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 00:40:06.662526 systemd[1]: Started sshd@1-10.0.0.160:22-10.0.0.1:50030.service - OpenSSH per-connection server daemon (10.0.0.1:50030). May 9 00:40:06.701711 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 50030 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w May 9 00:40:06.703142 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:40:06.706995 systemd-logind[1441]: New session 2 of user core. May 9 00:40:06.725041 systemd[1]: Started session-2.scope - Session 2 of User core. 
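The first kubelet start fails hard because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file is normally written during join, and it evidently appears before the restart at 00:40:08 (plausibly created by the install.sh invoked below), which is why the later start succeeds. For reference, a minimal hand-written KubeletConfiguration that would satisfy the load is sketched here; the real file carries many more generated fields:

    mkdir -p /var/lib/kubelet
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    authentication:
      anonymous:
        enabled: false
    authorization:
      mode: Webhook
    EOF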
May 9 00:40:06.778452 sshd[1565]: pam_unix(sshd:session): session closed for user core May 9 00:40:06.787564 systemd[1]: sshd@1-10.0.0.160:22-10.0.0.1:50030.service: Deactivated successfully. May 9 00:40:06.789248 systemd[1]: session-2.scope: Deactivated successfully. May 9 00:40:06.790762 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit. May 9 00:40:06.791988 systemd[1]: Started sshd@2-10.0.0.160:22-10.0.0.1:50038.service - OpenSSH per-connection server daemon (10.0.0.1:50038). May 9 00:40:06.792672 systemd-logind[1441]: Removed session 2. May 9 00:40:06.829941 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 50038 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w May 9 00:40:06.831388 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:40:06.834799 systemd-logind[1441]: New session 3 of user core. May 9 00:40:06.842030 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 00:40:06.890703 sshd[1572]: pam_unix(sshd:session): session closed for user core May 9 00:40:06.897540 systemd[1]: sshd@2-10.0.0.160:22-10.0.0.1:50038.service: Deactivated successfully. May 9 00:40:06.899173 systemd[1]: session-3.scope: Deactivated successfully. May 9 00:40:06.900638 systemd-logind[1441]: Session 3 logged out. Waiting for processes to exit. May 9 00:40:06.912162 systemd[1]: Started sshd@3-10.0.0.160:22-10.0.0.1:50050.service - OpenSSH per-connection server daemon (10.0.0.1:50050). May 9 00:40:06.913442 systemd-logind[1441]: Removed session 3. May 9 00:40:06.945495 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 50050 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w May 9 00:40:06.946877 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:40:06.950436 systemd-logind[1441]: New session 4 of user core. May 9 00:40:06.962033 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 00:40:07.016515 sshd[1579]: pam_unix(sshd:session): session closed for user core May 9 00:40:07.026561 systemd[1]: sshd@3-10.0.0.160:22-10.0.0.1:50050.service: Deactivated successfully. May 9 00:40:07.028351 systemd[1]: session-4.scope: Deactivated successfully. May 9 00:40:07.029969 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit. May 9 00:40:07.045197 systemd[1]: Started sshd@4-10.0.0.160:22-10.0.0.1:50066.service - OpenSSH per-connection server daemon (10.0.0.1:50066). May 9 00:40:07.046010 systemd-logind[1441]: Removed session 4. May 9 00:40:07.077355 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 50066 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w May 9 00:40:07.078783 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:40:07.082106 systemd-logind[1441]: New session 5 of user core. May 9 00:40:07.093056 systemd[1]: Started session-5.scope - Session 5 of User core. May 9 00:40:07.149309 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 00:40:07.149622 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:40:07.168236 sudo[1590]: pam_unix(sudo:session): session closed for user root May 9 00:40:07.169869 sshd[1586]: pam_unix(sshd:session): session closed for user core May 9 00:40:07.183577 systemd[1]: sshd@4-10.0.0.160:22-10.0.0.1:50066.service: Deactivated successfully. May 9 00:40:07.185224 systemd[1]: session-5.scope: Deactivated successfully. 
May 9 00:40:07.186456 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit. May 9 00:40:07.187762 systemd[1]: Started sshd@5-10.0.0.160:22-10.0.0.1:50078.service - OpenSSH per-connection server daemon (10.0.0.1:50078). May 9 00:40:07.188479 systemd-logind[1441]: Removed session 5. May 9 00:40:07.225970 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 50078 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w May 9 00:40:07.227466 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:40:07.231147 systemd-logind[1441]: New session 6 of user core. May 9 00:40:07.245047 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 00:40:07.297696 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 00:40:07.298027 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:40:07.301145 sudo[1599]: pam_unix(sudo:session): session closed for user root May 9 00:40:07.307174 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 9 00:40:07.307509 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:40:07.336142 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 9 00:40:07.337733 auditctl[1602]: No rules May 9 00:40:07.338286 systemd[1]: audit-rules.service: Deactivated successfully. May 9 00:40:07.338526 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 9 00:40:07.341142 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 00:40:07.369762 augenrules[1620]: No rules May 9 00:40:07.371513 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 00:40:07.372805 sudo[1598]: pam_unix(sudo:session): session closed for user root May 9 00:40:07.374627 sshd[1595]: pam_unix(sshd:session): session closed for user core May 9 00:40:07.385600 systemd[1]: sshd@5-10.0.0.160:22-10.0.0.1:50078.service: Deactivated successfully. May 9 00:40:07.387270 systemd[1]: session-6.scope: Deactivated successfully. May 9 00:40:07.388509 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit. May 9 00:40:07.404151 systemd[1]: Started sshd@6-10.0.0.160:22-10.0.0.1:50080.service - OpenSSH per-connection server daemon (10.0.0.1:50080). May 9 00:40:07.404954 systemd-logind[1441]: Removed session 6. May 9 00:40:07.439327 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 50080 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w May 9 00:40:07.440923 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:40:07.444650 systemd-logind[1441]: New session 7 of user core. May 9 00:40:07.453043 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 00:40:07.504912 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 00:40:07.505268 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:40:07.526201 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 00:40:07.545717 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 00:40:07.545990 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 9 00:40:07.976024 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
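The session above removes the stock rules under /etc/audit/rules.d and restarts audit-rules.service, so auditctl and augenrules both report "No rules". Purely as an illustrative sketch (the file name and key are made up, assuming the standard auditd/augenrules tooling), a local watch rule would be added back like this:

    cat >/etc/audit/rules.d/99-local.rules <<'EOF'
    # example rule: watch sshd configuration changes
    -w /etc/ssh/sshd_config -p wa -k sshd-config
    EOF
    augenrules --load    # rebuild /etc/audit/audit.rules and load it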
May 9 00:40:07.994128 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:40:08.015265 systemd[1]: Reloading requested from client PID 1673 ('systemctl') (unit session-7.scope)... May 9 00:40:08.015282 systemd[1]: Reloading... May 9 00:40:08.097958 zram_generator::config[1715]: No configuration found. May 9 00:40:08.660814 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:40:08.736462 systemd[1]: Reloading finished in 720 ms. May 9 00:40:08.783872 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 00:40:08.784037 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 00:40:08.784316 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:40:08.786791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:40:08.942772 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:40:08.947376 (kubelet)[1760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:40:08.987889 kubelet[1760]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:40:08.987889 kubelet[1760]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 9 00:40:08.987889 kubelet[1760]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:40:08.988255 kubelet[1760]: I0509 00:40:08.987963 1760 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:40:09.369581 kubelet[1760]: I0509 00:40:09.369490 1760 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 9 00:40:09.369581 kubelet[1760]: I0509 00:40:09.369526 1760 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:40:09.369838 kubelet[1760]: I0509 00:40:09.369816 1760 server.go:954] "Client rotation is on, will bootstrap in background" May 9 00:40:09.388092 kubelet[1760]: I0509 00:40:09.388040 1760 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:40:09.395587 kubelet[1760]: E0509 00:40:09.395533 1760 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 00:40:09.395587 kubelet[1760]: I0509 00:40:09.395581 1760 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 00:40:09.401179 kubelet[1760]: I0509 00:40:09.401090 1760 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 00:40:09.402228 kubelet[1760]: I0509 00:40:09.402191 1760 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:40:09.402776 kubelet[1760]: I0509 00:40:09.402335 1760 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.160","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 00:40:09.402776 kubelet[1760]: I0509 00:40:09.402772 1760 topology_manager.go:138] "Creating topology manager with none policy" May 9 00:40:09.402776 kubelet[1760]: I0509 00:40:09.402784 1760 container_manager_linux.go:304] "Creating device plugin manager" May 9 00:40:09.403046 kubelet[1760]: I0509 00:40:09.403013 1760 state_mem.go:36] "Initialized new in-memory state store" May 9 00:40:09.405875 kubelet[1760]: I0509 00:40:09.405844 1760 kubelet.go:446] "Attempting to sync node with API server" May 9 00:40:09.405875 kubelet[1760]: I0509 00:40:09.405871 1760 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:40:09.405952 kubelet[1760]: I0509 00:40:09.405895 1760 kubelet.go:352] "Adding apiserver pod source" May 9 00:40:09.405952 kubelet[1760]: I0509 00:40:09.405908 1760 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:40:09.406056 kubelet[1760]: E0509 00:40:09.405997 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:09.406056 kubelet[1760]: E0509 00:40:09.406051 1760 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:09.408537 kubelet[1760]: I0509 00:40:09.408508 1760 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 00:40:09.408923 kubelet[1760]: I0509 00:40:09.408903 1760 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:40:09.409427 kubelet[1760]: W0509 00:40:09.409406 1760 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 9 00:40:09.410508 kubelet[1760]: W0509 00:40:09.410387 1760 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.160" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 9 00:40:09.410508 kubelet[1760]: E0509 00:40:09.410418 1760 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.160\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" May 9 00:40:09.410634 kubelet[1760]: W0509 00:40:09.410522 1760 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 9 00:40:09.410634 kubelet[1760]: E0509 00:40:09.410536 1760 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" May 9 00:40:09.411400 kubelet[1760]: I0509 00:40:09.411376 1760 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 9 00:40:09.411435 kubelet[1760]: I0509 00:40:09.411413 1760 server.go:1287] "Started kubelet" May 9 00:40:09.411641 kubelet[1760]: I0509 00:40:09.411474 1760 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:40:09.412517 kubelet[1760]: I0509 00:40:09.412361 1760 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:40:09.412517 kubelet[1760]: I0509 00:40:09.412463 1760 server.go:490] "Adding debug handlers to kubelet server" May 9 00:40:09.413059 kubelet[1760]: I0509 00:40:09.413032 1760 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:40:09.413690 kubelet[1760]: I0509 00:40:09.413515 1760 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:40:09.413690 kubelet[1760]: I0509 00:40:09.413552 1760 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 00:40:09.413776 kubelet[1760]: I0509 00:40:09.413751 1760 volume_manager.go:297] "Starting Kubelet Volume Manager" May 9 00:40:09.413970 kubelet[1760]: E0509 00:40:09.413918 1760 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.160\" not found" May 9 00:40:09.414305 kubelet[1760]: I0509 00:40:09.414194 1760 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 00:40:09.414819 kubelet[1760]: I0509 00:40:09.414664 1760 reconciler.go:26] "Reconciler: start to sync state" May 9 00:40:09.418515 kubelet[1760]: E0509 00:40:09.418491 1760 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:40:09.418750 kubelet[1760]: E0509 00:40:09.418718 1760 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.160\" not found" node="10.0.0.160" May 9 00:40:09.419229 kubelet[1760]: I0509 00:40:09.419209 1760 factory.go:221] Registration of the containerd container factory successfully May 9 00:40:09.419229 kubelet[1760]: I0509 00:40:09.419223 1760 factory.go:221] Registration of the systemd container factory successfully May 9 00:40:09.419312 kubelet[1760]: I0509 00:40:09.419285 1760 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:40:09.428562 kubelet[1760]: I0509 00:40:09.428539 1760 cpu_manager.go:221] "Starting CPU manager" policy="none" May 9 00:40:09.428562 kubelet[1760]: I0509 00:40:09.428564 1760 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 9 00:40:09.428644 kubelet[1760]: I0509 00:40:09.428578 1760 state_mem.go:36] "Initialized new in-memory state store" May 9 00:40:09.514899 kubelet[1760]: E0509 00:40:09.514850 1760 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.160\" not found" May 9 00:40:09.615818 kubelet[1760]: E0509 00:40:09.615780 1760 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.160\" not found" May 9 00:40:09.716243 kubelet[1760]: E0509 00:40:09.716137 1760 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.160\" not found" May 9 00:40:09.816562 kubelet[1760]: E0509 00:40:09.816516 1760 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.160\" not found" May 9 00:40:09.829667 kubelet[1760]: E0509 00:40:09.829641 1760 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.160" not found May 9 00:40:09.862784 kubelet[1760]: I0509 00:40:09.862734 1760 policy_none.go:49] "None policy: Start" May 9 00:40:09.862784 kubelet[1760]: I0509 00:40:09.862756 1760 memory_manager.go:186] "Starting memorymanager" policy="None" May 9 00:40:09.862784 kubelet[1760]: I0509 00:40:09.862768 1760 state_mem.go:35] "Initializing new in-memory state store" May 9 00:40:09.873017 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 9 00:40:09.884341 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 00:40:09.889022 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 9 00:40:09.889545 kubelet[1760]: I0509 00:40:09.889503 1760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:40:09.890795 kubelet[1760]: I0509 00:40:09.890754 1760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 00:40:09.890795 kubelet[1760]: I0509 00:40:09.890783 1760 status_manager.go:227] "Starting to sync pod status with apiserver" May 9 00:40:09.890886 kubelet[1760]: I0509 00:40:09.890803 1760 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
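Earlier, systemd warned that KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS are referenced but unset, and the kubelet itself flags --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir as deprecated in favor of the config file. Where extra flags are genuinely needed, the usual pattern is a systemd drop-in that defines the variable; a sketch follows, with the drop-in name and the --node-ip value chosen for illustration rather than taken from this host's unit files:

    mkdir -p /etc/systemd/system/kubelet.service.d
    cat >/etc/systemd/system/kubelet.service.d/20-extra-args.conf <<'EOF'
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--node-ip=10.0.0.160"
    EOF
    systemctl daemon-reload
    systemctl restart kubelet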
May 9 00:40:09.890886 kubelet[1760]: I0509 00:40:09.890812 1760 kubelet.go:2388] "Starting kubelet main sync loop" May 9 00:40:09.891653 kubelet[1760]: E0509 00:40:09.890950 1760 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:40:09.895823 kubelet[1760]: I0509 00:40:09.895792 1760 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:40:09.896031 kubelet[1760]: I0509 00:40:09.895985 1760 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 00:40:09.896092 kubelet[1760]: I0509 00:40:09.896020 1760 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:40:09.896356 kubelet[1760]: I0509 00:40:09.896338 1760 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:40:09.896687 kubelet[1760]: E0509 00:40:09.896669 1760 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 9 00:40:09.896741 kubelet[1760]: E0509 00:40:09.896699 1760 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.160\" not found" May 9 00:40:09.997764 kubelet[1760]: I0509 00:40:09.997662 1760 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.160" May 9 00:40:10.001117 kubelet[1760]: I0509 00:40:10.001074 1760 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.160" May 9 00:40:10.001162 kubelet[1760]: E0509 00:40:10.001117 1760 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.160\": node \"10.0.0.160\" not found" May 9 00:40:10.004562 kubelet[1760]: E0509 00:40:10.004536 1760 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.160\" not found" May 9 00:40:10.104834 kubelet[1760]: E0509 00:40:10.104781 1760 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.160\" not found" May 9 00:40:10.205620 kubelet[1760]: E0509 00:40:10.205575 1760 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.160\" not found" May 9 00:40:10.306320 kubelet[1760]: E0509 00:40:10.306174 1760 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.160\" not found" May 9 00:40:10.371870 kubelet[1760]: I0509 00:40:10.371812 1760 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 9 00:40:10.372118 kubelet[1760]: W0509 00:40:10.372094 1760 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 9 00:40:10.372152 kubelet[1760]: W0509 00:40:10.372119 1760 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 9 00:40:10.406244 kubelet[1760]: E0509 00:40:10.406145 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:10.406382 kubelet[1760]: E0509 
00:40:10.406276 1760 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.160\" not found" May 9 00:40:10.507291 kubelet[1760]: E0509 00:40:10.507244 1760 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.160\" not found" May 9 00:40:10.517504 sudo[1631]: pam_unix(sudo:session): session closed for user root May 9 00:40:10.519140 sshd[1628]: pam_unix(sshd:session): session closed for user core May 9 00:40:10.523456 systemd[1]: sshd@6-10.0.0.160:22-10.0.0.1:50080.service: Deactivated successfully. May 9 00:40:10.525345 systemd[1]: session-7.scope: Deactivated successfully. May 9 00:40:10.526017 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit. May 9 00:40:10.526855 systemd-logind[1441]: Removed session 7. May 9 00:40:10.608104 kubelet[1760]: E0509 00:40:10.607964 1760 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.160\" not found" May 9 00:40:10.709009 kubelet[1760]: I0509 00:40:10.708971 1760 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 9 00:40:10.709332 containerd[1456]: time="2025-05-09T00:40:10.709298803Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 00:40:10.709772 kubelet[1760]: I0509 00:40:10.709435 1760 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 9 00:40:11.407076 kubelet[1760]: E0509 00:40:11.407029 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:11.407521 kubelet[1760]: I0509 00:40:11.407097 1760 apiserver.go:52] "Watching apiserver" May 9 00:40:11.415202 kubelet[1760]: I0509 00:40:11.415181 1760 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 00:40:11.415608 systemd[1]: Created slice kubepods-besteffort-pod9fc3300f_fd62_45b8_8709_3f2e35a9ddcc.slice - libcontainer container kubepods-besteffort-pod9fc3300f_fd62_45b8_8709_3f2e35a9ddcc.slice. May 9 00:40:11.424936 systemd[1]: Created slice kubepods-burstable-pod6ee37777_c366_4620_abd4_56b1094e3dff.slice - libcontainer container kubepods-burstable-pod6ee37777_c366_4620_abd4_56b1094e3dff.slice. 
May 9 00:40:11.425721 kubelet[1760]: I0509 00:40:11.425696 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-cilium-cgroup\") pod \"cilium-7kdqz\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " pod="kube-system/cilium-7kdqz" May 9 00:40:11.425806 kubelet[1760]: I0509 00:40:11.425730 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl8f6\" (UniqueName: \"kubernetes.io/projected/6ee37777-c366-4620-abd4-56b1094e3dff-kube-api-access-bl8f6\") pod \"cilium-7kdqz\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " pod="kube-system/cilium-7kdqz" May 9 00:40:11.425806 kubelet[1760]: I0509 00:40:11.425751 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-cilium-run\") pod \"cilium-7kdqz\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " pod="kube-system/cilium-7kdqz" May 9 00:40:11.425806 kubelet[1760]: I0509 00:40:11.425769 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-hostproc\") pod \"cilium-7kdqz\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " pod="kube-system/cilium-7kdqz" May 9 00:40:11.425806 kubelet[1760]: I0509 00:40:11.425786 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ee37777-c366-4620-abd4-56b1094e3dff-hubble-tls\") pod \"cilium-7kdqz\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " pod="kube-system/cilium-7kdqz" May 9 00:40:11.425943 kubelet[1760]: I0509 00:40:11.425813 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-host-proc-sys-net\") pod \"cilium-7kdqz\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " pod="kube-system/cilium-7kdqz" May 9 00:40:11.425943 kubelet[1760]: I0509 00:40:11.425833 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-host-proc-sys-kernel\") pod \"cilium-7kdqz\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " pod="kube-system/cilium-7kdqz" May 9 00:40:11.425943 kubelet[1760]: I0509 00:40:11.425847 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ee37777-c366-4620-abd4-56b1094e3dff-clustermesh-secrets\") pod \"cilium-7kdqz\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " pod="kube-system/cilium-7kdqz" May 9 00:40:11.425943 kubelet[1760]: I0509 00:40:11.425860 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fc3300f-fd62-45b8-8709-3f2e35a9ddcc-xtables-lock\") pod \"kube-proxy-zlzwn\" (UID: \"9fc3300f-fd62-45b8-8709-3f2e35a9ddcc\") " pod="kube-system/kube-proxy-zlzwn" May 9 00:40:11.425943 kubelet[1760]: I0509 00:40:11.425873 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/9fc3300f-fd62-45b8-8709-3f2e35a9ddcc-lib-modules\") pod \"kube-proxy-zlzwn\" (UID: \"9fc3300f-fd62-45b8-8709-3f2e35a9ddcc\") " pod="kube-system/kube-proxy-zlzwn" May 9 00:40:11.426108 kubelet[1760]: I0509 00:40:11.425886 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h45fb\" (UniqueName: \"kubernetes.io/projected/9fc3300f-fd62-45b8-8709-3f2e35a9ddcc-kube-api-access-h45fb\") pod \"kube-proxy-zlzwn\" (UID: \"9fc3300f-fd62-45b8-8709-3f2e35a9ddcc\") " pod="kube-system/kube-proxy-zlzwn" May 9 00:40:11.426108 kubelet[1760]: I0509 00:40:11.425913 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-lib-modules\") pod \"cilium-7kdqz\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " pod="kube-system/cilium-7kdqz" May 9 00:40:11.426108 kubelet[1760]: I0509 00:40:11.425965 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-xtables-lock\") pod \"cilium-7kdqz\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " pod="kube-system/cilium-7kdqz" May 9 00:40:11.426108 kubelet[1760]: I0509 00:40:11.426003 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-etc-cni-netd\") pod \"cilium-7kdqz\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " pod="kube-system/cilium-7kdqz" May 9 00:40:11.426108 kubelet[1760]: I0509 00:40:11.426024 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ee37777-c366-4620-abd4-56b1094e3dff-cilium-config-path\") pod \"cilium-7kdqz\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " pod="kube-system/cilium-7kdqz" May 9 00:40:11.426214 kubelet[1760]: I0509 00:40:11.426044 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9fc3300f-fd62-45b8-8709-3f2e35a9ddcc-kube-proxy\") pod \"kube-proxy-zlzwn\" (UID: \"9fc3300f-fd62-45b8-8709-3f2e35a9ddcc\") " pod="kube-system/kube-proxy-zlzwn" May 9 00:40:11.426345 kubelet[1760]: I0509 00:40:11.426323 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-bpf-maps\") pod \"cilium-7kdqz\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " pod="kube-system/cilium-7kdqz" May 9 00:40:11.426385 kubelet[1760]: I0509 00:40:11.426365 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-cni-path\") pod \"cilium-7kdqz\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " pod="kube-system/cilium-7kdqz" May 9 00:40:11.724411 kubelet[1760]: E0509 00:40:11.724300 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:11.725024 containerd[1456]: time="2025-05-09T00:40:11.724971084Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-zlzwn,Uid:9fc3300f-fd62-45b8-8709-3f2e35a9ddcc,Namespace:kube-system,Attempt:0,}" May 9 00:40:11.737727 kubelet[1760]: E0509 00:40:11.737706 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:11.738047 containerd[1456]: time="2025-05-09T00:40:11.738022675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7kdqz,Uid:6ee37777-c366-4620-abd4-56b1094e3dff,Namespace:kube-system,Attempt:0,}" May 9 00:40:12.407741 kubelet[1760]: E0509 00:40:12.407702 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:12.511288 containerd[1456]: time="2025-05-09T00:40:12.511246171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:40:12.512094 containerd[1456]: time="2025-05-09T00:40:12.512047094Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:40:12.512913 containerd[1456]: time="2025-05-09T00:40:12.512885186Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 9 00:40:12.513698 containerd[1456]: time="2025-05-09T00:40:12.513669697Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:40:12.514563 containerd[1456]: time="2025-05-09T00:40:12.514528077Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:40:12.517104 containerd[1456]: time="2025-05-09T00:40:12.517053134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:40:12.517916 containerd[1456]: time="2025-05-09T00:40:12.517881077Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 792.827197ms" May 9 00:40:12.519735 containerd[1456]: time="2025-05-09T00:40:12.519711219Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 781.634723ms" May 9 00:40:12.530896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount687371913.mount: Deactivated successfully. May 9 00:40:12.627062 containerd[1456]: time="2025-05-09T00:40:12.626914923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:40:12.627062 containerd[1456]: time="2025-05-09T00:40:12.626996526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:40:12.627411 containerd[1456]: time="2025-05-09T00:40:12.627154182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:40:12.627411 containerd[1456]: time="2025-05-09T00:40:12.627309022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:40:12.628009 containerd[1456]: time="2025-05-09T00:40:12.627831472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:40:12.628009 containerd[1456]: time="2025-05-09T00:40:12.627876787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:40:12.628009 containerd[1456]: time="2025-05-09T00:40:12.627890403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:40:12.629033 containerd[1456]: time="2025-05-09T00:40:12.628915646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:40:12.690058 systemd[1]: Started cri-containerd-29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7.scope - libcontainer container 29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7. May 9 00:40:12.692065 systemd[1]: Started cri-containerd-90c2c4f6870718372ec8643eb63c6e9e0c3832dd8675418994d347d5fc988417.scope - libcontainer container 90c2c4f6870718372ec8643eb63c6e9e0c3832dd8675418994d347d5fc988417. 
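Both sandboxes are launched through the io.containerd.runc.v2 shim, matching the runc runtime declared in the CRI config dumped at startup (overlayfs snapshotter, pause:3.8 sandbox image, SystemdCgroup:true). As a sketch of how those same settings read in containerd's TOML form, written to a scratch path here because the live /etc/containerd/config.toml on this host is not shown in the log:

    cat >/tmp/cri-runc-excerpt.toml <<'EOF'
    # excerpt equivalent to the CRI plugin config logged above (containerd 1.7, config version 2)
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
    EOF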
May 9 00:40:12.712495 containerd[1456]: time="2025-05-09T00:40:12.712451806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7kdqz,Uid:6ee37777-c366-4620-abd4-56b1094e3dff,Namespace:kube-system,Attempt:0,} returns sandbox id \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\"" May 9 00:40:12.713479 kubelet[1760]: E0509 00:40:12.713459 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:12.714646 containerd[1456]: time="2025-05-09T00:40:12.714620133Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 9 00:40:12.716888 containerd[1456]: time="2025-05-09T00:40:12.716854073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zlzwn,Uid:9fc3300f-fd62-45b8-8709-3f2e35a9ddcc,Namespace:kube-system,Attempt:0,} returns sandbox id \"90c2c4f6870718372ec8643eb63c6e9e0c3832dd8675418994d347d5fc988417\"" May 9 00:40:12.717763 kubelet[1760]: E0509 00:40:12.717728 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:13.408295 kubelet[1760]: E0509 00:40:13.408250 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:14.409005 kubelet[1760]: E0509 00:40:14.408966 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:15.410032 kubelet[1760]: E0509 00:40:15.409994 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:16.067413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1458863164.mount: Deactivated successfully. 
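The two RunPodSandbox calls above return sandbox IDs for cilium-7kdqz (29b78281...) and kube-proxy-zlzwn (90c2c4f6...), and the cilium image pull starts next. Assuming crictl is installed (it does not appear in this log), the same state could be inspected through the CRI socket containerd is serving:

    ENDPOINT=unix:///run/containerd/containerd.sock
    crictl --runtime-endpoint "$ENDPOINT" pods          # lists the cilium-7kdqz and kube-proxy-zlzwn sandboxes
    crictl --runtime-endpoint "$ENDPOINT" images        # shows pause:3.8 and, once pulled, the cilium image
    crictl --runtime-endpoint "$ENDPOINT" inspectp 29b78281fba5b   # details of the cilium-7kdqz sandbox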
May 9 00:40:16.411009 kubelet[1760]: E0509 00:40:16.410959 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:17.412179 kubelet[1760]: E0509 00:40:17.412121 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:18.413196 kubelet[1760]: E0509 00:40:18.413152 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:18.556362 containerd[1456]: time="2025-05-09T00:40:18.556314582Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:40:18.556993 containerd[1456]: time="2025-05-09T00:40:18.556960594Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 9 00:40:18.558066 containerd[1456]: time="2025-05-09T00:40:18.558026473Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:40:18.559502 containerd[1456]: time="2025-05-09T00:40:18.559471253Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.844729401s" May 9 00:40:18.559536 containerd[1456]: time="2025-05-09T00:40:18.559499426Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 9 00:40:18.560396 containerd[1456]: time="2025-05-09T00:40:18.560363637Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 9 00:40:18.564576 containerd[1456]: time="2025-05-09T00:40:18.564537736Z" level=info msg="CreateContainer within sandbox \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 00:40:18.577473 containerd[1456]: time="2025-05-09T00:40:18.577434787Z" level=info msg="CreateContainer within sandbox \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97\"" May 9 00:40:18.578153 containerd[1456]: time="2025-05-09T00:40:18.578114422Z" level=info msg="StartContainer for \"4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97\"" May 9 00:40:18.601534 systemd[1]: run-containerd-runc-k8s.io-4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97-runc.R3haHx.mount: Deactivated successfully. May 9 00:40:18.610062 systemd[1]: Started cri-containerd-4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97.scope - libcontainer container 4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97. 
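Annotation: the once-per-second file_linux.go messages interleaved above are kubelet's static-pod config source re-checking its manifest directory; /etc/kubernetes/manifests does not exist on this node, so the source is skipped until the path appears. The message is informational, not fatal. A rough sketch of that check-and-ignore loop, assuming a hypothetical one-second poll interval (illustrative only, not kubelet code):

import os
import time

MANIFEST_DIR = "/etc/kubernetes/manifests"  # staticPodPath reported in the log
POLL_SECONDS = 1                            # assumed; matches the ~1s cadence of the messages

def poll_static_pod_dir(iterations=3):
    for _ in range(iterations):
        if not os.path.isdir(MANIFEST_DIR):
            print('Unable to read config path: "path does not exist, ignoring"', MANIFEST_DIR)
        else:
            print("static pod manifests:", sorted(os.listdir(MANIFEST_DIR)))
        time.sleep(POLL_SECONDS)

poll_static_pod_dir()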
May 9 00:40:18.635521 containerd[1456]: time="2025-05-09T00:40:18.635475675Z" level=info msg="StartContainer for \"4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97\" returns successfully" May 9 00:40:18.644128 systemd[1]: cri-containerd-4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97.scope: Deactivated successfully. May 9 00:40:18.905410 kubelet[1760]: E0509 00:40:18.905360 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:19.085037 containerd[1456]: time="2025-05-09T00:40:19.084962835Z" level=info msg="shim disconnected" id=4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97 namespace=k8s.io May 9 00:40:19.085037 containerd[1456]: time="2025-05-09T00:40:19.085020102Z" level=warning msg="cleaning up after shim disconnected" id=4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97 namespace=k8s.io May 9 00:40:19.085037 containerd[1456]: time="2025-05-09T00:40:19.085029340Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:40:19.413693 kubelet[1760]: E0509 00:40:19.413638 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:19.574849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97-rootfs.mount: Deactivated successfully. May 9 00:40:19.908033 kubelet[1760]: E0509 00:40:19.908001 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:19.909482 containerd[1456]: time="2025-05-09T00:40:19.909446260Z" level=info msg="CreateContainer within sandbox \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 00:40:19.925368 containerd[1456]: time="2025-05-09T00:40:19.925317220Z" level=info msg="CreateContainer within sandbox \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478\"" May 9 00:40:19.925859 containerd[1456]: time="2025-05-09T00:40:19.925809423Z" level=info msg="StartContainer for \"b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478\"" May 9 00:40:19.952074 systemd[1]: Started cri-containerd-b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478.scope - libcontainer container b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478. May 9 00:40:19.978169 containerd[1456]: time="2025-05-09T00:40:19.978115574Z" level=info msg="StartContainer for \"b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478\" returns successfully" May 9 00:40:19.987078 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 00:40:19.987311 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 00:40:19.987376 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 9 00:40:19.998343 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:40:19.998618 systemd[1]: cri-containerd-b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478.scope: Deactivated successfully. 
May 9 00:40:20.013841 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 00:40:20.134753 containerd[1456]: time="2025-05-09T00:40:20.134674905Z" level=info msg="shim disconnected" id=b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478 namespace=k8s.io May 9 00:40:20.134753 containerd[1456]: time="2025-05-09T00:40:20.134722905Z" level=warning msg="cleaning up after shim disconnected" id=b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478 namespace=k8s.io May 9 00:40:20.134753 containerd[1456]: time="2025-05-09T00:40:20.134738885Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:40:20.389626 containerd[1456]: time="2025-05-09T00:40:20.389583350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:40:20.390366 containerd[1456]: time="2025-05-09T00:40:20.390301267Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 9 00:40:20.393068 containerd[1456]: time="2025-05-09T00:40:20.391426367Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:40:20.395316 containerd[1456]: time="2025-05-09T00:40:20.395284053Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.834881083s" May 9 00:40:20.395372 containerd[1456]: time="2025-05-09T00:40:20.395317055Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 9 00:40:20.395915 containerd[1456]: time="2025-05-09T00:40:20.395827342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:40:20.397160 containerd[1456]: time="2025-05-09T00:40:20.397131188Z" level=info msg="CreateContainer within sandbox \"90c2c4f6870718372ec8643eb63c6e9e0c3832dd8675418994d347d5fc988417\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 00:40:20.412454 containerd[1456]: time="2025-05-09T00:40:20.412411710Z" level=info msg="CreateContainer within sandbox \"90c2c4f6870718372ec8643eb63c6e9e0c3832dd8675418994d347d5fc988417\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"54d1f7b8e008617cb9a8b765266614725ebbcde32605f1f0f5578154eb8e4d4f\"" May 9 00:40:20.413043 containerd[1456]: time="2025-05-09T00:40:20.412752569Z" level=info msg="StartContainer for \"54d1f7b8e008617cb9a8b765266614725ebbcde32605f1f0f5578154eb8e4d4f\"" May 9 00:40:20.413921 kubelet[1760]: E0509 00:40:20.413894 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:20.443065 systemd[1]: Started cri-containerd-54d1f7b8e008617cb9a8b765266614725ebbcde32605f1f0f5578154eb8e4d4f.scope - libcontainer container 54d1f7b8e008617cb9a8b765266614725ebbcde32605f1f0f5578154eb8e4d4f. 
May 9 00:40:20.469577 containerd[1456]: time="2025-05-09T00:40:20.469533594Z" level=info msg="StartContainer for \"54d1f7b8e008617cb9a8b765266614725ebbcde32605f1f0f5578154eb8e4d4f\" returns successfully" May 9 00:40:20.574433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478-rootfs.mount: Deactivated successfully. May 9 00:40:20.574543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount649675606.mount: Deactivated successfully. May 9 00:40:20.910912 kubelet[1760]: E0509 00:40:20.910876 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:20.912171 kubelet[1760]: E0509 00:40:20.912140 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:20.912524 containerd[1456]: time="2025-05-09T00:40:20.912492318Z" level=info msg="CreateContainer within sandbox \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 00:40:20.928524 containerd[1456]: time="2025-05-09T00:40:20.928462513Z" level=info msg="CreateContainer within sandbox \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900\"" May 9 00:40:20.928890 containerd[1456]: time="2025-05-09T00:40:20.928866020Z" level=info msg="StartContainer for \"3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900\"" May 9 00:40:20.933899 kubelet[1760]: I0509 00:40:20.933854 1760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zlzwn" podStartSLOduration=3.256027859 podStartE2EDuration="10.933840661s" podCreationTimestamp="2025-05-09 00:40:10 +0000 UTC" firstStartedPulling="2025-05-09 00:40:12.718181783 +0000 UTC m=+3.766550694" lastFinishedPulling="2025-05-09 00:40:20.395994586 +0000 UTC m=+11.444363496" observedRunningTime="2025-05-09 00:40:20.933725114 +0000 UTC m=+11.982094015" watchObservedRunningTime="2025-05-09 00:40:20.933840661 +0000 UTC m=+11.982209571" May 9 00:40:20.963077 systemd[1]: Started cri-containerd-3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900.scope - libcontainer container 3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900. May 9 00:40:20.988808 containerd[1456]: time="2025-05-09T00:40:20.988763531Z" level=info msg="StartContainer for \"3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900\" returns successfully" May 9 00:40:20.990362 systemd[1]: cri-containerd-3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900.scope: Deactivated successfully. 
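Annotation: the pod_startup_latency_tracker entry for kube-proxy-zlzwn above carries enough data to reproduce both reported durations. podStartE2EDuration is the time from podCreationTimestamp to observedRunningTime, and podStartSLOduration is that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The check below uses the monotonic m=+ offsets printed in the entry; the relationship is inferred from the logged values rather than quoted from kubelet source, but it reproduces the figure exactly:

# Figures copied from the kube-proxy-zlzwn pod_startup_latency_tracker entry above.
pod_start_e2e         = 10.933840661   # podStartE2EDuration, seconds
first_started_pulling = 3.766550694    # m=+ offset of firstStartedPulling, seconds
last_finished_pulling = 11.444363496   # m=+ offset of lastFinishedPulling, seconds

image_pull_window = last_finished_pulling - first_started_pulling   # ~7.677812802 s
pod_start_slo     = pod_start_e2e - image_pull_window                # ~3.256027859 s

print(f"image pull window   = {image_pull_window:.9f}s")
print(f"podStartSLOduration = {pod_start_slo:.9f}s  (log reports 3.256027859s)")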
May 9 00:40:21.305098 containerd[1456]: time="2025-05-09T00:40:21.304969453Z" level=info msg="shim disconnected" id=3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900 namespace=k8s.io May 9 00:40:21.305098 containerd[1456]: time="2025-05-09T00:40:21.305019457Z" level=warning msg="cleaning up after shim disconnected" id=3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900 namespace=k8s.io May 9 00:40:21.305098 containerd[1456]: time="2025-05-09T00:40:21.305027942Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:40:21.414258 kubelet[1760]: E0509 00:40:21.414223 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:21.573448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900-rootfs.mount: Deactivated successfully. May 9 00:40:21.914374 kubelet[1760]: E0509 00:40:21.914183 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:21.914374 kubelet[1760]: E0509 00:40:21.914273 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:21.915592 containerd[1456]: time="2025-05-09T00:40:21.915560888Z" level=info msg="CreateContainer within sandbox \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 00:40:21.931229 containerd[1456]: time="2025-05-09T00:40:21.931175817Z" level=info msg="CreateContainer within sandbox \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1\"" May 9 00:40:21.931552 containerd[1456]: time="2025-05-09T00:40:21.931529831Z" level=info msg="StartContainer for \"30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1\"" May 9 00:40:21.958066 systemd[1]: Started cri-containerd-30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1.scope - libcontainer container 30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1. May 9 00:40:21.980724 systemd[1]: cri-containerd-30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1.scope: Deactivated successfully. 
May 9 00:40:21.982049 containerd[1456]: time="2025-05-09T00:40:21.982011410Z" level=info msg="StartContainer for \"30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1\" returns successfully" May 9 00:40:22.002237 containerd[1456]: time="2025-05-09T00:40:22.002178979Z" level=info msg="shim disconnected" id=30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1 namespace=k8s.io May 9 00:40:22.002237 containerd[1456]: time="2025-05-09T00:40:22.002229243Z" level=warning msg="cleaning up after shim disconnected" id=30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1 namespace=k8s.io May 9 00:40:22.002237 containerd[1456]: time="2025-05-09T00:40:22.002238119Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:40:22.415227 kubelet[1760]: E0509 00:40:22.415202 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:22.573562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1-rootfs.mount: Deactivated successfully. May 9 00:40:22.919102 kubelet[1760]: E0509 00:40:22.919062 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:22.920500 containerd[1456]: time="2025-05-09T00:40:22.920464319Z" level=info msg="CreateContainer within sandbox \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 9 00:40:22.936172 containerd[1456]: time="2025-05-09T00:40:22.936132579Z" level=info msg="CreateContainer within sandbox \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8\"" May 9 00:40:22.937253 containerd[1456]: time="2025-05-09T00:40:22.936584626Z" level=info msg="StartContainer for \"df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8\"" May 9 00:40:22.965084 systemd[1]: Started cri-containerd-df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8.scope - libcontainer container df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8. 
May 9 00:40:22.990748 containerd[1456]: time="2025-05-09T00:40:22.990699180Z" level=info msg="StartContainer for \"df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8\" returns successfully" May 9 00:40:23.101502 kubelet[1760]: I0509 00:40:23.101281 1760 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 9 00:40:23.415946 kubelet[1760]: E0509 00:40:23.415889 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:23.442957 kernel: Initializing XFRM netlink socket May 9 00:40:23.922669 kubelet[1760]: E0509 00:40:23.922645 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:23.933914 kubelet[1760]: I0509 00:40:23.933852 1760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7kdqz" podStartSLOduration=8.087872419 podStartE2EDuration="13.933835805s" podCreationTimestamp="2025-05-09 00:40:10 +0000 UTC" firstStartedPulling="2025-05-09 00:40:12.714272411 +0000 UTC m=+3.762641321" lastFinishedPulling="2025-05-09 00:40:18.560235797 +0000 UTC m=+9.608604707" observedRunningTime="2025-05-09 00:40:23.933825365 +0000 UTC m=+14.982194275" watchObservedRunningTime="2025-05-09 00:40:23.933835805 +0000 UTC m=+14.982204715" May 9 00:40:24.416511 kubelet[1760]: E0509 00:40:24.416455 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:24.924127 kubelet[1760]: E0509 00:40:24.924100 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:25.127281 systemd-networkd[1403]: cilium_host: Link UP May 9 00:40:25.127451 systemd-networkd[1403]: cilium_net: Link UP May 9 00:40:25.127455 systemd-networkd[1403]: cilium_net: Gained carrier May 9 00:40:25.127922 systemd-networkd[1403]: cilium_host: Gained carrier May 9 00:40:25.228726 systemd-networkd[1403]: cilium_vxlan: Link UP May 9 00:40:25.228738 systemd-networkd[1403]: cilium_vxlan: Gained carrier May 9 00:40:25.416966 kubelet[1760]: E0509 00:40:25.416908 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:25.425967 kernel: NET: Registered PF_ALG protocol family May 9 00:40:25.779134 systemd-networkd[1403]: cilium_host: Gained IPv6LL May 9 00:40:25.907049 systemd-networkd[1403]: cilium_net: Gained IPv6LL May 9 00:40:25.926188 kubelet[1760]: E0509 00:40:25.926148 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:26.037012 systemd-networkd[1403]: lxc_health: Link UP May 9 00:40:26.048624 systemd-networkd[1403]: lxc_health: Gained carrier May 9 00:40:26.417986 kubelet[1760]: E0509 00:40:26.417941 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:26.544397 systemd[1]: Created slice kubepods-besteffort-podce97fca1_f94c_48d9_bca5_74c59c995df8.slice - libcontainer container kubepods-besteffort-podce97fca1_f94c_48d9_bca5_74c59c995df8.slice. 
May 9 00:40:26.611152 systemd-networkd[1403]: cilium_vxlan: Gained IPv6LL May 9 00:40:26.614862 kubelet[1760]: I0509 00:40:26.614832 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tpgh\" (UniqueName: \"kubernetes.io/projected/ce97fca1-f94c-48d9-bca5-74c59c995df8-kube-api-access-4tpgh\") pod \"nginx-deployment-7fcdb87857-m47ng\" (UID: \"ce97fca1-f94c-48d9-bca5-74c59c995df8\") " pod="default/nginx-deployment-7fcdb87857-m47ng" May 9 00:40:26.847952 containerd[1456]: time="2025-05-09T00:40:26.847891180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-m47ng,Uid:ce97fca1-f94c-48d9-bca5-74c59c995df8,Namespace:default,Attempt:0,}" May 9 00:40:26.878374 systemd-networkd[1403]: lxcca386f949b12: Link UP May 9 00:40:26.888968 kernel: eth0: renamed from tmp961a3 May 9 00:40:26.894411 systemd-networkd[1403]: lxcca386f949b12: Gained carrier May 9 00:40:27.418571 kubelet[1760]: E0509 00:40:27.418520 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:27.699119 systemd-networkd[1403]: lxc_health: Gained IPv6LL May 9 00:40:27.739655 kubelet[1760]: E0509 00:40:27.739629 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:28.418654 kubelet[1760]: E0509 00:40:28.418614 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:28.467054 systemd-networkd[1403]: lxcca386f949b12: Gained IPv6LL May 9 00:40:29.406282 kubelet[1760]: E0509 00:40:29.406244 1760 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:29.418863 kubelet[1760]: E0509 00:40:29.418836 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:30.149467 containerd[1456]: time="2025-05-09T00:40:30.149359811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:40:30.149828 containerd[1456]: time="2025-05-09T00:40:30.149489033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:40:30.149828 containerd[1456]: time="2025-05-09T00:40:30.149521164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:40:30.149828 containerd[1456]: time="2025-05-09T00:40:30.149658311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:40:30.178054 systemd[1]: Started cri-containerd-961a3a53ba699e30c9f3f07f452871161757dcc39e08ab11b7ccc16ef5de9df3.scope - libcontainer container 961a3a53ba699e30c9f3f07f452871161757dcc39e08ab11b7ccc16ef5de9df3. 
May 9 00:40:30.188556 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:40:30.211213 containerd[1456]: time="2025-05-09T00:40:30.211175510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-m47ng,Uid:ce97fca1-f94c-48d9-bca5-74c59c995df8,Namespace:default,Attempt:0,} returns sandbox id \"961a3a53ba699e30c9f3f07f452871161757dcc39e08ab11b7ccc16ef5de9df3\"" May 9 00:40:30.212314 containerd[1456]: time="2025-05-09T00:40:30.212274882Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 9 00:40:30.419339 kubelet[1760]: E0509 00:40:30.419248 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:31.420281 kubelet[1760]: E0509 00:40:31.420242 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:32.421019 kubelet[1760]: E0509 00:40:32.420970 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:32.526817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3576632522.mount: Deactivated successfully. May 9 00:40:33.421146 kubelet[1760]: E0509 00:40:33.421093 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:33.921261 containerd[1456]: time="2025-05-09T00:40:33.921208075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:40:33.921966 containerd[1456]: time="2025-05-09T00:40:33.921893368Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73306220" May 9 00:40:33.923119 containerd[1456]: time="2025-05-09T00:40:33.923085280Z" level=info msg="ImageCreate event name:\"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:40:33.925545 containerd[1456]: time="2025-05-09T00:40:33.925513379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:40:33.926616 containerd[1456]: time="2025-05-09T00:40:33.926576432Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"73306098\" in 3.714252497s" May 9 00:40:33.926661 containerd[1456]: time="2025-05-09T00:40:33.926615467Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 9 00:40:33.928451 containerd[1456]: time="2025-05-09T00:40:33.928404872Z" level=info msg="CreateContainer within sandbox \"961a3a53ba699e30c9f3f07f452871161757dcc39e08ab11b7ccc16ef5de9df3\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 9 00:40:33.939802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2929364257.mount: Deactivated successfully. 
May 9 00:40:33.941030 containerd[1456]: time="2025-05-09T00:40:33.940996036Z" level=info msg="CreateContainer within sandbox \"961a3a53ba699e30c9f3f07f452871161757dcc39e08ab11b7ccc16ef5de9df3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"5ce32d8e2b43f8f02bb68b24ae231b5c4ad740178f134fb6aa2966180c30d98c\"" May 9 00:40:33.941431 containerd[1456]: time="2025-05-09T00:40:33.941381451Z" level=info msg="StartContainer for \"5ce32d8e2b43f8f02bb68b24ae231b5c4ad740178f134fb6aa2966180c30d98c\"" May 9 00:40:33.968056 systemd[1]: Started cri-containerd-5ce32d8e2b43f8f02bb68b24ae231b5c4ad740178f134fb6aa2966180c30d98c.scope - libcontainer container 5ce32d8e2b43f8f02bb68b24ae231b5c4ad740178f134fb6aa2966180c30d98c. May 9 00:40:33.991867 containerd[1456]: time="2025-05-09T00:40:33.991824098Z" level=info msg="StartContainer for \"5ce32d8e2b43f8f02bb68b24ae231b5c4ad740178f134fb6aa2966180c30d98c\" returns successfully" May 9 00:40:34.421824 kubelet[1760]: E0509 00:40:34.421795 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:34.948285 kubelet[1760]: I0509 00:40:34.948226 1760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-m47ng" podStartSLOduration=5.232757942 podStartE2EDuration="8.948211074s" podCreationTimestamp="2025-05-09 00:40:26 +0000 UTC" firstStartedPulling="2025-05-09 00:40:30.211906932 +0000 UTC m=+21.260275842" lastFinishedPulling="2025-05-09 00:40:33.927360064 +0000 UTC m=+24.975728974" observedRunningTime="2025-05-09 00:40:34.948175285 +0000 UTC m=+25.996544195" watchObservedRunningTime="2025-05-09 00:40:34.948211074 +0000 UTC m=+25.996579974" May 9 00:40:35.422818 kubelet[1760]: E0509 00:40:35.422775 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:36.423380 kubelet[1760]: E0509 00:40:36.423337 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:37.423648 kubelet[1760]: E0509 00:40:37.423587 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:38.010228 systemd[1]: Created slice kubepods-besteffort-pod4d129e21_1084_4577_815d_69171ed7e17a.slice - libcontainer container kubepods-besteffort-pod4d129e21_1084_4577_815d_69171ed7e17a.slice. 
May 9 00:40:38.068040 kubelet[1760]: I0509 00:40:38.067996 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rt4w\" (UniqueName: \"kubernetes.io/projected/4d129e21-1084-4577-815d-69171ed7e17a-kube-api-access-7rt4w\") pod \"nfs-server-provisioner-0\" (UID: \"4d129e21-1084-4577-815d-69171ed7e17a\") " pod="default/nfs-server-provisioner-0" May 9 00:40:38.068040 kubelet[1760]: I0509 00:40:38.068036 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/4d129e21-1084-4577-815d-69171ed7e17a-data\") pod \"nfs-server-provisioner-0\" (UID: \"4d129e21-1084-4577-815d-69171ed7e17a\") " pod="default/nfs-server-provisioner-0" May 9 00:40:38.313879 containerd[1456]: time="2025-05-09T00:40:38.313747291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4d129e21-1084-4577-815d-69171ed7e17a,Namespace:default,Attempt:0,}" May 9 00:40:38.350870 systemd-networkd[1403]: lxcb26abaeb8a40: Link UP May 9 00:40:38.356952 kernel: eth0: renamed from tmp97dff May 9 00:40:38.366652 systemd-networkd[1403]: lxcb26abaeb8a40: Gained carrier May 9 00:40:38.424026 kubelet[1760]: E0509 00:40:38.423970 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:38.589785 containerd[1456]: time="2025-05-09T00:40:38.589690498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:40:38.589953 containerd[1456]: time="2025-05-09T00:40:38.589744511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:40:38.589953 containerd[1456]: time="2025-05-09T00:40:38.589793294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:40:38.589953 containerd[1456]: time="2025-05-09T00:40:38.589886243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:40:38.614058 systemd[1]: Started cri-containerd-97dff2690a68b74ab1574f0229b5a9ade84a3d1a1a31e43868cd500ccb11b130.scope - libcontainer container 97dff2690a68b74ab1574f0229b5a9ade84a3d1a1a31e43868cd500ccb11b130. 
May 9 00:40:38.624496 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:40:38.648520 containerd[1456]: time="2025-05-09T00:40:38.648479003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4d129e21-1084-4577-815d-69171ed7e17a,Namespace:default,Attempt:0,} returns sandbox id \"97dff2690a68b74ab1574f0229b5a9ade84a3d1a1a31e43868cd500ccb11b130\"" May 9 00:40:38.650198 containerd[1456]: time="2025-05-09T00:40:38.650163810Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 9 00:40:39.425053 kubelet[1760]: E0509 00:40:39.425017 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:39.924126 systemd-networkd[1403]: lxcb26abaeb8a40: Gained IPv6LL May 9 00:40:40.425856 kubelet[1760]: E0509 00:40:40.425808 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:40.662236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount362392212.mount: Deactivated successfully. May 9 00:40:40.849753 kubelet[1760]: I0509 00:40:40.849198 1760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 00:40:40.849753 kubelet[1760]: E0509 00:40:40.849560 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:40.952949 kubelet[1760]: E0509 00:40:40.952586 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:41.426231 kubelet[1760]: E0509 00:40:41.426179 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:42.427078 kubelet[1760]: E0509 00:40:42.427033 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:42.952695 containerd[1456]: time="2025-05-09T00:40:42.952636190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:40:42.953394 containerd[1456]: time="2025-05-09T00:40:42.953339159Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" May 9 00:40:42.954576 containerd[1456]: time="2025-05-09T00:40:42.954545349Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:40:42.957053 containerd[1456]: time="2025-05-09T00:40:42.957025108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:40:42.958004 containerd[1456]: time="2025-05-09T00:40:42.957958036Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest 
\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.307752666s" May 9 00:40:42.958004 containerd[1456]: time="2025-05-09T00:40:42.957994847Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" May 9 00:40:42.960054 containerd[1456]: time="2025-05-09T00:40:42.960011170Z" level=info msg="CreateContainer within sandbox \"97dff2690a68b74ab1574f0229b5a9ade84a3d1a1a31e43868cd500ccb11b130\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 9 00:40:42.972101 containerd[1456]: time="2025-05-09T00:40:42.972070033Z" level=info msg="CreateContainer within sandbox \"97dff2690a68b74ab1574f0229b5a9ade84a3d1a1a31e43868cd500ccb11b130\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"ad0cb9de217c17002eedbdd6b630a7c4ece847d51502b70b6d8bb79d2013f68b\"" May 9 00:40:42.972511 containerd[1456]: time="2025-05-09T00:40:42.972456770Z" level=info msg="StartContainer for \"ad0cb9de217c17002eedbdd6b630a7c4ece847d51502b70b6d8bb79d2013f68b\"" May 9 00:40:43.036064 systemd[1]: Started cri-containerd-ad0cb9de217c17002eedbdd6b630a7c4ece847d51502b70b6d8bb79d2013f68b.scope - libcontainer container ad0cb9de217c17002eedbdd6b630a7c4ece847d51502b70b6d8bb79d2013f68b. May 9 00:40:43.135607 containerd[1456]: time="2025-05-09T00:40:43.135566945Z" level=info msg="StartContainer for \"ad0cb9de217c17002eedbdd6b630a7c4ece847d51502b70b6d8bb79d2013f68b\" returns successfully" May 9 00:40:43.428153 kubelet[1760]: E0509 00:40:43.428121 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:43.966877 kubelet[1760]: I0509 00:40:43.966829 1760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.657668511 podStartE2EDuration="6.966811924s" podCreationTimestamp="2025-05-09 00:40:37 +0000 UTC" firstStartedPulling="2025-05-09 00:40:38.649656769 +0000 UTC m=+29.698025669" lastFinishedPulling="2025-05-09 00:40:42.958800172 +0000 UTC m=+34.007169082" observedRunningTime="2025-05-09 00:40:43.966370673 +0000 UTC m=+35.014739583" watchObservedRunningTime="2025-05-09 00:40:43.966811924 +0000 UTC m=+35.015180834" May 9 00:40:44.429181 kubelet[1760]: E0509 00:40:44.429149 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:45.429853 kubelet[1760]: E0509 00:40:45.429801 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:45.575533 update_engine[1442]: I20250509 00:40:45.575469 1442 update_attempter.cc:509] Updating boot flags... 
May 9 00:40:45.601044 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3145) May 9 00:40:45.639999 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3149) May 9 00:40:45.663973 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3149) May 9 00:40:46.430410 kubelet[1760]: E0509 00:40:46.430381 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:47.430900 kubelet[1760]: E0509 00:40:47.430867 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:48.431876 kubelet[1760]: E0509 00:40:48.431827 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:49.406286 kubelet[1760]: E0509 00:40:49.406245 1760 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:49.431941 kubelet[1760]: E0509 00:40:49.431886 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:50.432371 kubelet[1760]: E0509 00:40:50.432327 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:51.433210 kubelet[1760]: E0509 00:40:51.433178 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:52.434037 kubelet[1760]: E0509 00:40:52.433983 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:53.230118 systemd[1]: Created slice kubepods-besteffort-podde76ce05_626d_4f0a_b8e1_e882805fceb1.slice - libcontainer container kubepods-besteffort-podde76ce05_626d_4f0a_b8e1_e882805fceb1.slice. May 9 00:40:53.248271 kubelet[1760]: I0509 00:40:53.248235 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1cf42dfb-c533-4266-a391-b96675d7b964\" (UniqueName: \"kubernetes.io/nfs/de76ce05-626d-4f0a-b8e1-e882805fceb1-pvc-1cf42dfb-c533-4266-a391-b96675d7b964\") pod \"test-pod-1\" (UID: \"de76ce05-626d-4f0a-b8e1-e882805fceb1\") " pod="default/test-pod-1" May 9 00:40:53.248271 kubelet[1760]: I0509 00:40:53.248270 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z2hj\" (UniqueName: \"kubernetes.io/projected/de76ce05-626d-4f0a-b8e1-e882805fceb1-kube-api-access-8z2hj\") pod \"test-pod-1\" (UID: \"de76ce05-626d-4f0a-b8e1-e882805fceb1\") " pod="default/test-pod-1" May 9 00:40:53.371964 kernel: FS-Cache: Loaded May 9 00:40:53.434887 kubelet[1760]: E0509 00:40:53.434842 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:53.441402 kernel: RPC: Registered named UNIX socket transport module. May 9 00:40:53.441440 kernel: RPC: Registered udp transport module. May 9 00:40:53.441466 kernel: RPC: Registered tcp transport module. May 9 00:40:53.442004 kernel: RPC: Registered tcp-with-tls transport module. May 9 00:40:53.443469 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
May 9 00:40:53.727170 kernel: NFS: Registering the id_resolver key type May 9 00:40:53.727270 kernel: Key type id_resolver registered May 9 00:40:53.727290 kernel: Key type id_legacy registered May 9 00:40:53.753471 nfsidmap[3176]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 9 00:40:53.758099 nfsidmap[3179]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 9 00:40:53.833201 containerd[1456]: time="2025-05-09T00:40:53.833161251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:de76ce05-626d-4f0a-b8e1-e882805fceb1,Namespace:default,Attempt:0,}" May 9 00:40:53.859823 systemd-networkd[1403]: lxc37948a7e3388: Link UP May 9 00:40:53.869958 kernel: eth0: renamed from tmp5c9e8 May 9 00:40:53.876460 systemd-networkd[1403]: lxc37948a7e3388: Gained carrier May 9 00:40:54.060550 containerd[1456]: time="2025-05-09T00:40:54.060269011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:40:54.060550 containerd[1456]: time="2025-05-09T00:40:54.060428253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:40:54.060550 containerd[1456]: time="2025-05-09T00:40:54.060457968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:40:54.060712 containerd[1456]: time="2025-05-09T00:40:54.060579238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:40:54.082058 systemd[1]: Started cri-containerd-5c9e8af3e15c8422a3067594ec803885d334c58b7433fc5dce8fa859becc15a8.scope - libcontainer container 5c9e8af3e15c8422a3067594ec803885d334c58b7433fc5dce8fa859becc15a8. 
May 9 00:40:54.092837 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:40:54.114988 containerd[1456]: time="2025-05-09T00:40:54.114901165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:de76ce05-626d-4f0a-b8e1-e882805fceb1,Namespace:default,Attempt:0,} returns sandbox id \"5c9e8af3e15c8422a3067594ec803885d334c58b7433fc5dce8fa859becc15a8\"" May 9 00:40:54.116295 containerd[1456]: time="2025-05-09T00:40:54.116259673Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 9 00:40:54.435569 kubelet[1760]: E0509 00:40:54.435536 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:54.478379 containerd[1456]: time="2025-05-09T00:40:54.478329186Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:40:54.479183 containerd[1456]: time="2025-05-09T00:40:54.479121433Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" May 9 00:40:54.481527 containerd[1456]: time="2025-05-09T00:40:54.481484950Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"73306098\" in 365.187657ms" May 9 00:40:54.481569 containerd[1456]: time="2025-05-09T00:40:54.481526949Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 9 00:40:54.483321 containerd[1456]: time="2025-05-09T00:40:54.483292045Z" level=info msg="CreateContainer within sandbox \"5c9e8af3e15c8422a3067594ec803885d334c58b7433fc5dce8fa859becc15a8\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 9 00:40:54.495251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4119410190.mount: Deactivated successfully. May 9 00:40:54.497668 containerd[1456]: time="2025-05-09T00:40:54.497636506Z" level=info msg="CreateContainer within sandbox \"5c9e8af3e15c8422a3067594ec803885d334c58b7433fc5dce8fa859becc15a8\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"927911d008d9faf4b9c25ce553cb365c0ce58ff50b0d9e38dfa8371f00ffc189\"" May 9 00:40:54.498048 containerd[1456]: time="2025-05-09T00:40:54.498018157Z" level=info msg="StartContainer for \"927911d008d9faf4b9c25ce553cb365c0ce58ff50b0d9e38dfa8371f00ffc189\"" May 9 00:40:54.527060 systemd[1]: Started cri-containerd-927911d008d9faf4b9c25ce553cb365c0ce58ff50b0d9e38dfa8371f00ffc189.scope - libcontainer container 927911d008d9faf4b9c25ce553cb365c0ce58ff50b0d9e38dfa8371f00ffc189. 
May 9 00:40:54.550346 containerd[1456]: time="2025-05-09T00:40:54.550306150Z" level=info msg="StartContainer for \"927911d008d9faf4b9c25ce553cb365c0ce58ff50b0d9e38dfa8371f00ffc189\" returns successfully" May 9 00:40:54.982370 kubelet[1760]: I0509 00:40:54.982317 1760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.615986995 podStartE2EDuration="16.982299767s" podCreationTimestamp="2025-05-09 00:40:38 +0000 UTC" firstStartedPulling="2025-05-09 00:40:54.115774305 +0000 UTC m=+45.164143216" lastFinishedPulling="2025-05-09 00:40:54.482087078 +0000 UTC m=+45.530455988" observedRunningTime="2025-05-09 00:40:54.98199962 +0000 UTC m=+46.030368530" watchObservedRunningTime="2025-05-09 00:40:54.982299767 +0000 UTC m=+46.030668677" May 9 00:40:55.436691 kubelet[1760]: E0509 00:40:55.436659 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:55.859127 systemd-networkd[1403]: lxc37948a7e3388: Gained IPv6LL May 9 00:40:56.436923 kubelet[1760]: E0509 00:40:56.436892 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:57.437354 kubelet[1760]: E0509 00:40:57.437327 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:58.438322 kubelet[1760]: E0509 00:40:58.438291 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:40:59.438609 kubelet[1760]: E0509 00:40:59.438575 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:00.439521 kubelet[1760]: E0509 00:41:00.439480 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:00.492110 containerd[1456]: time="2025-05-09T00:41:00.492054664Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 00:41:00.499655 containerd[1456]: time="2025-05-09T00:41:00.499617893Z" level=info msg="StopContainer for \"df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8\" with timeout 2 (s)" May 9 00:41:00.499821 containerd[1456]: time="2025-05-09T00:41:00.499803594Z" level=info msg="Stop container \"df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8\" with signal terminated" May 9 00:41:00.505817 systemd-networkd[1403]: lxc_health: Link DOWN May 9 00:41:00.505826 systemd-networkd[1403]: lxc_health: Lost carrier May 9 00:41:00.540704 systemd[1]: cri-containerd-df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8.scope: Deactivated successfully. May 9 00:41:00.541101 systemd[1]: cri-containerd-df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8.scope: Consumed 6.550s CPU time. May 9 00:41:00.559018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8-rootfs.mount: Deactivated successfully. 
May 9 00:41:00.569574 containerd[1456]: time="2025-05-09T00:41:00.569509988Z" level=info msg="shim disconnected" id=df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8 namespace=k8s.io May 9 00:41:00.569574 containerd[1456]: time="2025-05-09T00:41:00.569558299Z" level=warning msg="cleaning up after shim disconnected" id=df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8 namespace=k8s.io May 9 00:41:00.569574 containerd[1456]: time="2025-05-09T00:41:00.569566415Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:41:00.585645 containerd[1456]: time="2025-05-09T00:41:00.585602959Z" level=info msg="StopContainer for \"df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8\" returns successfully" May 9 00:41:00.586239 containerd[1456]: time="2025-05-09T00:41:00.586209593Z" level=info msg="StopPodSandbox for \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\"" May 9 00:41:00.586282 containerd[1456]: time="2025-05-09T00:41:00.586247666Z" level=info msg="Container to stop \"b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:41:00.586282 containerd[1456]: time="2025-05-09T00:41:00.586261982Z" level=info msg="Container to stop \"4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:41:00.586282 containerd[1456]: time="2025-05-09T00:41:00.586271841Z" level=info msg="Container to stop \"3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:41:00.586282 containerd[1456]: time="2025-05-09T00:41:00.586280758Z" level=info msg="Container to stop \"30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:41:00.586410 containerd[1456]: time="2025-05-09T00:41:00.586290756Z" level=info msg="Container to stop \"df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:41:00.588072 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7-shm.mount: Deactivated successfully. May 9 00:41:00.592193 systemd[1]: cri-containerd-29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7.scope: Deactivated successfully. May 9 00:41:00.614157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7-rootfs.mount: Deactivated successfully. 
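Annotation: the StopPodSandbox entry above will not proceed until five containers are confirmed exited; all five were created earlier in this log inside the same cilium-7kdqz sandbox (29b78281fba5…), so the teardown is simply unwinding the Cilium init-container chain and then the agent. The name-to-ID mapping below is assembled verbatim from the CreateContainer entries above, as a small lookup aid for reading the remaining shutdown lines:

# IDs collected from the CreateContainer/StartContainer entries earlier in this log,
# all inside sandbox 29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7.
CILIUM_7KDQZ_CONTAINERS = {
    "mount-cgroup":            "4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97",
    "apply-sysctl-overwrites": "b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478",
    "mount-bpf-fs":            "3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900",
    "clean-cilium-state":      "30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1",
    "cilium-agent":            "df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8",
}

def container_name(id_prefix):
    # Map a (possibly shortened) container ID back to its Cilium container name.
    return next((name for name, cid in CILIUM_7KDQZ_CONTAINERS.items()
                 if cid.startswith(id_prefix)), "unknown")

print(container_name("df8d9d16"))  # -> cilium-agent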
May 9 00:41:00.617472 containerd[1456]: time="2025-05-09T00:41:00.617416140Z" level=info msg="shim disconnected" id=29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7 namespace=k8s.io May 9 00:41:00.617472 containerd[1456]: time="2025-05-09T00:41:00.617462278Z" level=warning msg="cleaning up after shim disconnected" id=29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7 namespace=k8s.io May 9 00:41:00.617594 containerd[1456]: time="2025-05-09T00:41:00.617471725Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:41:00.631309 containerd[1456]: time="2025-05-09T00:41:00.631267144Z" level=info msg="TearDown network for sandbox \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\" successfully" May 9 00:41:00.631309 containerd[1456]: time="2025-05-09T00:41:00.631304335Z" level=info msg="StopPodSandbox for \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\" returns successfully" May 9 00:41:00.685153 kubelet[1760]: I0509 00:41:00.685115 1760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl8f6\" (UniqueName: \"kubernetes.io/projected/6ee37777-c366-4620-abd4-56b1094e3dff-kube-api-access-bl8f6\") pod \"6ee37777-c366-4620-abd4-56b1094e3dff\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " May 9 00:41:00.685153 kubelet[1760]: I0509 00:41:00.685151 1760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-host-proc-sys-net\") pod \"6ee37777-c366-4620-abd4-56b1094e3dff\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " May 9 00:41:00.685258 kubelet[1760]: I0509 00:41:00.685170 1760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ee37777-c366-4620-abd4-56b1094e3dff-clustermesh-secrets\") pod \"6ee37777-c366-4620-abd4-56b1094e3dff\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " May 9 00:41:00.685258 kubelet[1760]: I0509 00:41:00.685189 1760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-bpf-maps\") pod \"6ee37777-c366-4620-abd4-56b1094e3dff\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " May 9 00:41:00.685258 kubelet[1760]: I0509 00:41:00.685204 1760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-cni-path\") pod \"6ee37777-c366-4620-abd4-56b1094e3dff\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " May 9 00:41:00.685258 kubelet[1760]: I0509 00:41:00.685220 1760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-cilium-run\") pod \"6ee37777-c366-4620-abd4-56b1094e3dff\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " May 9 00:41:00.685258 kubelet[1760]: I0509 00:41:00.685236 1760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-hostproc\") pod \"6ee37777-c366-4620-abd4-56b1094e3dff\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " May 9 00:41:00.685258 kubelet[1760]: I0509 00:41:00.685252 1760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ee37777-c366-4620-abd4-56b1094e3dff-cilium-config-path\") pod \"6ee37777-c366-4620-abd4-56b1094e3dff\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " May 9 00:41:00.685410 kubelet[1760]: I0509 00:41:00.685238 1760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6ee37777-c366-4620-abd4-56b1094e3dff" (UID: "6ee37777-c366-4620-abd4-56b1094e3dff"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 00:41:00.685410 kubelet[1760]: I0509 00:41:00.685268 1760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-lib-modules\") pod \"6ee37777-c366-4620-abd4-56b1094e3dff\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " May 9 00:41:00.685410 kubelet[1760]: I0509 00:41:00.685299 1760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6ee37777-c366-4620-abd4-56b1094e3dff" (UID: "6ee37777-c366-4620-abd4-56b1094e3dff"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 00:41:00.685410 kubelet[1760]: I0509 00:41:00.685321 1760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-etc-cni-netd\") pod \"6ee37777-c366-4620-abd4-56b1094e3dff\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " May 9 00:41:00.685410 kubelet[1760]: I0509 00:41:00.685340 1760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-cilium-cgroup\") pod \"6ee37777-c366-4620-abd4-56b1094e3dff\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " May 9 00:41:00.685410 kubelet[1760]: I0509 00:41:00.685361 1760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ee37777-c366-4620-abd4-56b1094e3dff-hubble-tls\") pod \"6ee37777-c366-4620-abd4-56b1094e3dff\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " May 9 00:41:00.685549 kubelet[1760]: I0509 00:41:00.685383 1760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-host-proc-sys-kernel\") pod \"6ee37777-c366-4620-abd4-56b1094e3dff\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " May 9 00:41:00.685549 kubelet[1760]: I0509 00:41:00.685399 1760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-xtables-lock\") pod \"6ee37777-c366-4620-abd4-56b1094e3dff\" (UID: \"6ee37777-c366-4620-abd4-56b1094e3dff\") " May 9 00:41:00.685549 kubelet[1760]: I0509 00:41:00.685434 1760 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-lib-modules\") on node \"10.0.0.160\" DevicePath \"\"" May 9 00:41:00.685549 kubelet[1760]: I0509 00:41:00.685444 1760 reconciler_common.go:299] "Volume detached 
for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-host-proc-sys-net\") on node \"10.0.0.160\" DevicePath \"\"" May 9 00:41:00.685549 kubelet[1760]: I0509 00:41:00.685466 1760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6ee37777-c366-4620-abd4-56b1094e3dff" (UID: "6ee37777-c366-4620-abd4-56b1094e3dff"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 00:41:00.685549 kubelet[1760]: I0509 00:41:00.685482 1760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6ee37777-c366-4620-abd4-56b1094e3dff" (UID: "6ee37777-c366-4620-abd4-56b1094e3dff"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 00:41:00.685684 kubelet[1760]: I0509 00:41:00.685497 1760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6ee37777-c366-4620-abd4-56b1094e3dff" (UID: "6ee37777-c366-4620-abd4-56b1094e3dff"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 00:41:00.687809 kubelet[1760]: I0509 00:41:00.685762 1760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6ee37777-c366-4620-abd4-56b1094e3dff" (UID: "6ee37777-c366-4620-abd4-56b1094e3dff"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 00:41:00.687809 kubelet[1760]: I0509 00:41:00.685799 1760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6ee37777-c366-4620-abd4-56b1094e3dff" (UID: "6ee37777-c366-4620-abd4-56b1094e3dff"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 00:41:00.687809 kubelet[1760]: I0509 00:41:00.685821 1760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6ee37777-c366-4620-abd4-56b1094e3dff" (UID: "6ee37777-c366-4620-abd4-56b1094e3dff"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 00:41:00.687809 kubelet[1760]: I0509 00:41:00.685835 1760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-cni-path" (OuterVolumeSpecName: "cni-path") pod "6ee37777-c366-4620-abd4-56b1094e3dff" (UID: "6ee37777-c366-4620-abd4-56b1094e3dff"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 00:41:00.687809 kubelet[1760]: I0509 00:41:00.685849 1760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-hostproc" (OuterVolumeSpecName: "hostproc") pod "6ee37777-c366-4620-abd4-56b1094e3dff" (UID: "6ee37777-c366-4620-abd4-56b1094e3dff"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 00:41:00.688783 kubelet[1760]: I0509 00:41:00.688753 1760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee37777-c366-4620-abd4-56b1094e3dff-kube-api-access-bl8f6" (OuterVolumeSpecName: "kube-api-access-bl8f6") pod "6ee37777-c366-4620-abd4-56b1094e3dff" (UID: "6ee37777-c366-4620-abd4-56b1094e3dff"). InnerVolumeSpecName "kube-api-access-bl8f6". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 9 00:41:00.688946 kubelet[1760]: I0509 00:41:00.688913 1760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee37777-c366-4620-abd4-56b1094e3dff-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6ee37777-c366-4620-abd4-56b1094e3dff" (UID: "6ee37777-c366-4620-abd4-56b1094e3dff"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 9 00:41:00.689333 systemd[1]: var-lib-kubelet-pods-6ee37777\x2dc366\x2d4620\x2dabd4\x2d56b1094e3dff-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbl8f6.mount: Deactivated successfully. May 9 00:41:00.689625 kubelet[1760]: I0509 00:41:00.689525 1760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee37777-c366-4620-abd4-56b1094e3dff-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6ee37777-c366-4620-abd4-56b1094e3dff" (UID: "6ee37777-c366-4620-abd4-56b1094e3dff"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 9 00:41:00.690069 kubelet[1760]: I0509 00:41:00.690046 1760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ee37777-c366-4620-abd4-56b1094e3dff-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6ee37777-c366-4620-abd4-56b1094e3dff" (UID: "6ee37777-c366-4620-abd4-56b1094e3dff"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 9 00:41:00.785906 kubelet[1760]: I0509 00:41:00.785874 1760 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-cni-path\") on node \"10.0.0.160\" DevicePath \"\"" May 9 00:41:00.785906 kubelet[1760]: I0509 00:41:00.785895 1760 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bl8f6\" (UniqueName: \"kubernetes.io/projected/6ee37777-c366-4620-abd4-56b1094e3dff-kube-api-access-bl8f6\") on node \"10.0.0.160\" DevicePath \"\"" May 9 00:41:00.785906 kubelet[1760]: I0509 00:41:00.785905 1760 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ee37777-c366-4620-abd4-56b1094e3dff-clustermesh-secrets\") on node \"10.0.0.160\" DevicePath \"\"" May 9 00:41:00.786034 kubelet[1760]: I0509 00:41:00.785912 1760 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-bpf-maps\") on node \"10.0.0.160\" DevicePath \"\"" May 9 00:41:00.786034 kubelet[1760]: I0509 00:41:00.785955 1760 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-cilium-run\") on node \"10.0.0.160\" DevicePath \"\"" May 9 00:41:00.786034 kubelet[1760]: I0509 00:41:00.785964 1760 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-hostproc\") on node \"10.0.0.160\" DevicePath \"\"" May 9 00:41:00.786034 kubelet[1760]: I0509 00:41:00.785971 1760 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ee37777-c366-4620-abd4-56b1094e3dff-cilium-config-path\") on node \"10.0.0.160\" DevicePath \"\"" May 9 00:41:00.786034 kubelet[1760]: I0509 00:41:00.786003 1760 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-etc-cni-netd\") on node \"10.0.0.160\" DevicePath \"\"" May 9 00:41:00.786034 kubelet[1760]: I0509 00:41:00.786011 1760 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-cilium-cgroup\") on node \"10.0.0.160\" DevicePath \"\"" May 9 00:41:00.786034 kubelet[1760]: I0509 00:41:00.786019 1760 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ee37777-c366-4620-abd4-56b1094e3dff-hubble-tls\") on node \"10.0.0.160\" DevicePath \"\"" May 9 00:41:00.786034 kubelet[1760]: I0509 00:41:00.786026 1760 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-host-proc-sys-kernel\") on node \"10.0.0.160\" DevicePath \"\"" May 9 00:41:00.786206 kubelet[1760]: I0509 00:41:00.786034 1760 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ee37777-c366-4620-abd4-56b1094e3dff-xtables-lock\") on node \"10.0.0.160\" DevicePath \"\"" May 9 00:41:00.984336 kubelet[1760]: I0509 00:41:00.984267 1760 scope.go:117] "RemoveContainer" containerID="df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8" May 9 00:41:00.985520 containerd[1456]: time="2025-05-09T00:41:00.985487140Z" level=info 
msg="RemoveContainer for \"df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8\"" May 9 00:41:00.988805 containerd[1456]: time="2025-05-09T00:41:00.988749081Z" level=info msg="RemoveContainer for \"df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8\" returns successfully" May 9 00:41:00.989003 kubelet[1760]: I0509 00:41:00.988944 1760 scope.go:117] "RemoveContainer" containerID="30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1" May 9 00:41:00.989445 systemd[1]: Removed slice kubepods-burstable-pod6ee37777_c366_4620_abd4_56b1094e3dff.slice - libcontainer container kubepods-burstable-pod6ee37777_c366_4620_abd4_56b1094e3dff.slice. May 9 00:41:00.989647 systemd[1]: kubepods-burstable-pod6ee37777_c366_4620_abd4_56b1094e3dff.slice: Consumed 6.642s CPU time. May 9 00:41:00.990162 containerd[1456]: time="2025-05-09T00:41:00.989710233Z" level=info msg="RemoveContainer for \"30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1\"" May 9 00:41:00.992831 containerd[1456]: time="2025-05-09T00:41:00.992793696Z" level=info msg="RemoveContainer for \"30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1\" returns successfully" May 9 00:41:00.993040 kubelet[1760]: I0509 00:41:00.992923 1760 scope.go:117] "RemoveContainer" containerID="3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900" May 9 00:41:00.993723 containerd[1456]: time="2025-05-09T00:41:00.993680118Z" level=info msg="RemoveContainer for \"3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900\"" May 9 00:41:00.996805 containerd[1456]: time="2025-05-09T00:41:00.996769413Z" level=info msg="RemoveContainer for \"3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900\" returns successfully" May 9 00:41:00.996938 kubelet[1760]: I0509 00:41:00.996908 1760 scope.go:117] "RemoveContainer" containerID="b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478" May 9 00:41:00.997753 containerd[1456]: time="2025-05-09T00:41:00.997727218Z" level=info msg="RemoveContainer for \"b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478\"" May 9 00:41:01.000945 containerd[1456]: time="2025-05-09T00:41:01.000894841Z" level=info msg="RemoveContainer for \"b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478\" returns successfully" May 9 00:41:01.001075 kubelet[1760]: I0509 00:41:01.001057 1760 scope.go:117] "RemoveContainer" containerID="4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97" May 9 00:41:01.001869 containerd[1456]: time="2025-05-09T00:41:01.001843368Z" level=info msg="RemoveContainer for \"4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97\"" May 9 00:41:01.004687 containerd[1456]: time="2025-05-09T00:41:01.004660257Z" level=info msg="RemoveContainer for \"4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97\" returns successfully" May 9 00:41:01.004836 kubelet[1760]: I0509 00:41:01.004808 1760 scope.go:117] "RemoveContainer" containerID="df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8" May 9 00:41:01.005032 containerd[1456]: time="2025-05-09T00:41:01.004996992Z" level=error msg="ContainerStatus for \"df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8\": not found" May 9 00:41:01.005131 kubelet[1760]: E0509 00:41:01.005108 1760 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8\": not found" containerID="df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8" May 9 00:41:01.005177 kubelet[1760]: I0509 00:41:01.005134 1760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8"} err="failed to get container status \"df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"df8d9d16bb8db17cfd943867a387054ba5b0d36ca9fb77398a7e02b34fb654e8\": not found" May 9 00:41:01.005177 kubelet[1760]: I0509 00:41:01.005168 1760 scope.go:117] "RemoveContainer" containerID="30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1" May 9 00:41:01.005330 containerd[1456]: time="2025-05-09T00:41:01.005310583Z" level=error msg="ContainerStatus for \"30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1\": not found" May 9 00:41:01.005427 kubelet[1760]: E0509 00:41:01.005405 1760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1\": not found" containerID="30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1" May 9 00:41:01.005457 kubelet[1760]: I0509 00:41:01.005424 1760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1"} err="failed to get container status \"30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"30b90fae09b23f19e1ce0e62a9659230953e0993e0c73f711e255f21462932d1\": not found" May 9 00:41:01.005457 kubelet[1760]: I0509 00:41:01.005436 1760 scope.go:117] "RemoveContainer" containerID="3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900" May 9 00:41:01.005607 containerd[1456]: time="2025-05-09T00:41:01.005574140Z" level=error msg="ContainerStatus for \"3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900\": not found" May 9 00:41:01.005741 kubelet[1760]: E0509 00:41:01.005679 1760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900\": not found" containerID="3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900" May 9 00:41:01.005741 kubelet[1760]: I0509 00:41:01.005703 1760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900"} err="failed to get container status \"3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ebcad3d505963e57963f45384f32a2629e2f2b8e365a4d0ada86c5899016900\": not found" May 9 00:41:01.005741 
kubelet[1760]: I0509 00:41:01.005721 1760 scope.go:117] "RemoveContainer" containerID="b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478" May 9 00:41:01.005876 containerd[1456]: time="2025-05-09T00:41:01.005846744Z" level=error msg="ContainerStatus for \"b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478\": not found" May 9 00:41:01.005989 kubelet[1760]: E0509 00:41:01.005961 1760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478\": not found" containerID="b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478" May 9 00:41:01.006021 kubelet[1760]: I0509 00:41:01.005999 1760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478"} err="failed to get container status \"b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478\": rpc error: code = NotFound desc = an error occurred when try to find container \"b125ad104ede6a92db8e0c614180cc079cf63f7816a90ad7ea0abd274d82e478\": not found" May 9 00:41:01.006062 kubelet[1760]: I0509 00:41:01.006024 1760 scope.go:117] "RemoveContainer" containerID="4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97" May 9 00:41:01.006251 containerd[1456]: time="2025-05-09T00:41:01.006218043Z" level=error msg="ContainerStatus for \"4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97\": not found" May 9 00:41:01.006349 kubelet[1760]: E0509 00:41:01.006328 1760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97\": not found" containerID="4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97" May 9 00:41:01.006404 kubelet[1760]: I0509 00:41:01.006350 1760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97"} err="failed to get container status \"4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b80ac3a4cbe46da0f33d3b66650824673a57714a7ec8a261475309ed56b7a97\": not found" May 9 00:41:01.440344 kubelet[1760]: E0509 00:41:01.440313 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:01.472540 systemd[1]: var-lib-kubelet-pods-6ee37777\x2dc366\x2d4620\x2dabd4\x2d56b1094e3dff-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 9 00:41:01.472645 systemd[1]: var-lib-kubelet-pods-6ee37777\x2dc366\x2d4620\x2dabd4\x2d56b1094e3dff-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
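
Every volume of the deleted pod walks through the same kubelet reconciler sequence above: an "operationExecutor.UnmountVolume started" entry, an "UnmountVolume.TearDown succeeded" entry from the operation generator, and finally a "Volume detached" entry with an empty DevicePath, after which systemd reports the backing secret and projected mount units as deactivated. A minimal sketch, again assuming the journal is saved as journal.log and written only to illustrate the pattern, that groups those messages by volume name for the pod UID seen here:

    import re
    from collections import defaultdict

    POD_UID = "6ee37777-c366-4620-abd4-56b1094e3dff"  # UID taken from the entries above

    # The three reconciler messages seen above; volume names sit in escaped or plain quotes.
    STARTED  = re.compile(r'UnmountVolume started for volume \\"(?P<vol>[^\\"]+)\\"')
    TORNDOWN = re.compile(r'UnmountVolume\.TearDown succeeded for volume "[^"]+" \(OuterVolumeSpecName: "(?P<vol>[^"]+)"\)')
    DETACHED = re.compile(r'Volume detached for volume \\"(?P<vol>[^\\"]+)\\"')

    def volume_states(path="journal.log"):  # assumed path
        states = defaultdict(set)
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                if POD_UID not in line:
                    continue
                for label, rx in (("started", STARTED), ("torn-down", TORNDOWN), ("detached", DETACHED)):
                    for m in rx.finditer(line):
                        states[m.group("vol")].add(label)
        return states

    if __name__ == "__main__":
        for vol, seen in sorted(volume_states().items()):
            print(f"{vol:<22} {', '.join(sorted(seen))}")

For this pod, all fourteen volumes listed above should end up carrying all three labels.
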
May 9 00:41:01.894098 kubelet[1760]: I0509 00:41:01.894059 1760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee37777-c366-4620-abd4-56b1094e3dff" path="/var/lib/kubelet/pods/6ee37777-c366-4620-abd4-56b1094e3dff/volumes" May 9 00:41:02.441233 kubelet[1760]: E0509 00:41:02.441197 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:03.084677 kubelet[1760]: I0509 00:41:03.084631 1760 memory_manager.go:355] "RemoveStaleState removing state" podUID="6ee37777-c366-4620-abd4-56b1094e3dff" containerName="cilium-agent" May 9 00:41:03.089060 kubelet[1760]: W0509 00:41:03.089023 1760 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.0.0.160" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.160' and this object May 9 00:41:03.089060 kubelet[1760]: I0509 00:41:03.089042 1760 status_manager.go:890] "Failed to get status for pod" podUID="341a9111-3a98-4358-aa82-a9ed872807bc" pod="kube-system/cilium-operator-6c4d7847fc-4pv2t" err="pods \"cilium-operator-6c4d7847fc-4pv2t\" is forbidden: User \"system:node:10.0.0.160\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.160' and this object" May 9 00:41:03.089762 kubelet[1760]: E0509 00:41:03.089052 1760 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:10.0.0.160\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.160' and this object" logger="UnhandledError" May 9 00:41:03.089617 systemd[1]: Created slice kubepods-besteffort-pod341a9111_3a98_4358_aa82_a9ed872807bc.slice - libcontainer container kubepods-besteffort-pod341a9111_3a98_4358_aa82_a9ed872807bc.slice. May 9 00:41:03.102869 systemd[1]: Created slice kubepods-burstable-pod2e6b2572_1a41_4c2c_8272_e304fe6b6bcb.slice - libcontainer container kubepods-burstable-pod2e6b2572_1a41_4c2c_8272_e304fe6b6bcb.slice. 
May 9 00:41:03.197818 kubelet[1760]: I0509 00:41:03.197770 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e6b2572-1a41-4c2c-8272-e304fe6b6bcb-etc-cni-netd\") pod \"cilium-87qls\" (UID: \"2e6b2572-1a41-4c2c-8272-e304fe6b6bcb\") " pod="kube-system/cilium-87qls" May 9 00:41:03.197818 kubelet[1760]: I0509 00:41:03.197822 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e6b2572-1a41-4c2c-8272-e304fe6b6bcb-xtables-lock\") pod \"cilium-87qls\" (UID: \"2e6b2572-1a41-4c2c-8272-e304fe6b6bcb\") " pod="kube-system/cilium-87qls" May 9 00:41:03.197950 kubelet[1760]: I0509 00:41:03.197840 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e6b2572-1a41-4c2c-8272-e304fe6b6bcb-hubble-tls\") pod \"cilium-87qls\" (UID: \"2e6b2572-1a41-4c2c-8272-e304fe6b6bcb\") " pod="kube-system/cilium-87qls" May 9 00:41:03.197950 kubelet[1760]: I0509 00:41:03.197859 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/341a9111-3a98-4358-aa82-a9ed872807bc-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-4pv2t\" (UID: \"341a9111-3a98-4358-aa82-a9ed872807bc\") " pod="kube-system/cilium-operator-6c4d7847fc-4pv2t" May 9 00:41:03.197950 kubelet[1760]: I0509 00:41:03.197880 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e6b2572-1a41-4c2c-8272-e304fe6b6bcb-cilium-run\") pod \"cilium-87qls\" (UID: \"2e6b2572-1a41-4c2c-8272-e304fe6b6bcb\") " pod="kube-system/cilium-87qls" May 9 00:41:03.197950 kubelet[1760]: I0509 00:41:03.197896 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e6b2572-1a41-4c2c-8272-e304fe6b6bcb-cni-path\") pod \"cilium-87qls\" (UID: \"2e6b2572-1a41-4c2c-8272-e304fe6b6bcb\") " pod="kube-system/cilium-87qls" May 9 00:41:03.197950 kubelet[1760]: I0509 00:41:03.197909 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e6b2572-1a41-4c2c-8272-e304fe6b6bcb-host-proc-sys-net\") pod \"cilium-87qls\" (UID: \"2e6b2572-1a41-4c2c-8272-e304fe6b6bcb\") " pod="kube-system/cilium-87qls" May 9 00:41:03.198064 kubelet[1760]: I0509 00:41:03.197946 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5bpd\" (UniqueName: \"kubernetes.io/projected/2e6b2572-1a41-4c2c-8272-e304fe6b6bcb-kube-api-access-j5bpd\") pod \"cilium-87qls\" (UID: \"2e6b2572-1a41-4c2c-8272-e304fe6b6bcb\") " pod="kube-system/cilium-87qls" May 9 00:41:03.198064 kubelet[1760]: I0509 00:41:03.197965 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e6b2572-1a41-4c2c-8272-e304fe6b6bcb-cilium-cgroup\") pod \"cilium-87qls\" (UID: \"2e6b2572-1a41-4c2c-8272-e304fe6b6bcb\") " pod="kube-system/cilium-87qls" May 9 00:41:03.198064 kubelet[1760]: I0509 00:41:03.197982 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2e6b2572-1a41-4c2c-8272-e304fe6b6bcb-cilium-ipsec-secrets\") pod \"cilium-87qls\" (UID: \"2e6b2572-1a41-4c2c-8272-e304fe6b6bcb\") " pod="kube-system/cilium-87qls" May 9 00:41:03.198064 kubelet[1760]: I0509 00:41:03.198000 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e6b2572-1a41-4c2c-8272-e304fe6b6bcb-lib-modules\") pod \"cilium-87qls\" (UID: \"2e6b2572-1a41-4c2c-8272-e304fe6b6bcb\") " pod="kube-system/cilium-87qls" May 9 00:41:03.198064 kubelet[1760]: I0509 00:41:03.198014 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e6b2572-1a41-4c2c-8272-e304fe6b6bcb-host-proc-sys-kernel\") pod \"cilium-87qls\" (UID: \"2e6b2572-1a41-4c2c-8272-e304fe6b6bcb\") " pod="kube-system/cilium-87qls" May 9 00:41:03.198177 kubelet[1760]: I0509 00:41:03.198028 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e6b2572-1a41-4c2c-8272-e304fe6b6bcb-clustermesh-secrets\") pod \"cilium-87qls\" (UID: \"2e6b2572-1a41-4c2c-8272-e304fe6b6bcb\") " pod="kube-system/cilium-87qls" May 9 00:41:03.198177 kubelet[1760]: I0509 00:41:03.198047 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e6b2572-1a41-4c2c-8272-e304fe6b6bcb-bpf-maps\") pod \"cilium-87qls\" (UID: \"2e6b2572-1a41-4c2c-8272-e304fe6b6bcb\") " pod="kube-system/cilium-87qls" May 9 00:41:03.198177 kubelet[1760]: I0509 00:41:03.198090 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e6b2572-1a41-4c2c-8272-e304fe6b6bcb-hostproc\") pod \"cilium-87qls\" (UID: \"2e6b2572-1a41-4c2c-8272-e304fe6b6bcb\") " pod="kube-system/cilium-87qls" May 9 00:41:03.198177 kubelet[1760]: I0509 00:41:03.198129 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fbmx\" (UniqueName: \"kubernetes.io/projected/341a9111-3a98-4358-aa82-a9ed872807bc-kube-api-access-4fbmx\") pod \"cilium-operator-6c4d7847fc-4pv2t\" (UID: \"341a9111-3a98-4358-aa82-a9ed872807bc\") " pod="kube-system/cilium-operator-6c4d7847fc-4pv2t" May 9 00:41:03.198177 kubelet[1760]: I0509 00:41:03.198148 1760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e6b2572-1a41-4c2c-8272-e304fe6b6bcb-cilium-config-path\") pod \"cilium-87qls\" (UID: \"2e6b2572-1a41-4c2c-8272-e304fe6b6bcb\") " pod="kube-system/cilium-87qls" May 9 00:41:03.441693 kubelet[1760]: E0509 00:41:03.441587 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:04.292035 kubelet[1760]: E0509 00:41:04.291988 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:41:04.292369 containerd[1456]: time="2025-05-09T00:41:04.292325641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4pv2t,Uid:341a9111-3a98-4358-aa82-a9ed872807bc,Namespace:kube-system,Attempt:0,}" 
May 9 00:41:04.312317 containerd[1456]: time="2025-05-09T00:41:04.311620186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:41:04.312317 containerd[1456]: time="2025-05-09T00:41:04.312281411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:41:04.312317 containerd[1456]: time="2025-05-09T00:41:04.312294186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:41:04.312469 containerd[1456]: time="2025-05-09T00:41:04.312388242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:41:04.314324 kubelet[1760]: E0509 00:41:04.314272 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:41:04.316309 containerd[1456]: time="2025-05-09T00:41:04.316262839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-87qls,Uid:2e6b2572-1a41-4c2c-8272-e304fe6b6bcb,Namespace:kube-system,Attempt:0,}" May 9 00:41:04.335505 containerd[1456]: time="2025-05-09T00:41:04.335412051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:41:04.335580 containerd[1456]: time="2025-05-09T00:41:04.335486561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:41:04.335580 containerd[1456]: time="2025-05-09T00:41:04.335545292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:41:04.335677 containerd[1456]: time="2025-05-09T00:41:04.335645601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:41:04.340082 systemd[1]: Started cri-containerd-ab1fbb7c0e1876e46b472a3242b9a56f0142b252937a9560cb7c5f3fd85fcf6d.scope - libcontainer container ab1fbb7c0e1876e46b472a3242b9a56f0142b252937a9560cb7c5f3fd85fcf6d. May 9 00:41:04.363056 systemd[1]: Started cri-containerd-f5cfacb8b47e49564eda28f6c43edfb8e580846cd37589b3777dec9e7804fc57.scope - libcontainer container f5cfacb8b47e49564eda28f6c43edfb8e580846cd37589b3777dec9e7804fc57. 
May 9 00:41:04.383227 containerd[1456]: time="2025-05-09T00:41:04.383140112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-87qls,Uid:2e6b2572-1a41-4c2c-8272-e304fe6b6bcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5cfacb8b47e49564eda28f6c43edfb8e580846cd37589b3777dec9e7804fc57\"" May 9 00:41:04.385332 kubelet[1760]: E0509 00:41:04.385123 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:41:04.387084 containerd[1456]: time="2025-05-09T00:41:04.387040297Z" level=info msg="CreateContainer within sandbox \"f5cfacb8b47e49564eda28f6c43edfb8e580846cd37589b3777dec9e7804fc57\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 00:41:04.394315 containerd[1456]: time="2025-05-09T00:41:04.394189931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4pv2t,Uid:341a9111-3a98-4358-aa82-a9ed872807bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab1fbb7c0e1876e46b472a3242b9a56f0142b252937a9560cb7c5f3fd85fcf6d\"" May 9 00:41:04.394751 kubelet[1760]: E0509 00:41:04.394717 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:41:04.395582 containerd[1456]: time="2025-05-09T00:41:04.395547577Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 9 00:41:04.402264 containerd[1456]: time="2025-05-09T00:41:04.402229852Z" level=info msg="CreateContainer within sandbox \"f5cfacb8b47e49564eda28f6c43edfb8e580846cd37589b3777dec9e7804fc57\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c1176978b8bac7b7dd3b73e834806ba0c2597f29157f47867ea617f8d5b98b91\"" May 9 00:41:04.402557 containerd[1456]: time="2025-05-09T00:41:04.402535167Z" level=info msg="StartContainer for \"c1176978b8bac7b7dd3b73e834806ba0c2597f29157f47867ea617f8d5b98b91\"" May 9 00:41:04.430066 systemd[1]: Started cri-containerd-c1176978b8bac7b7dd3b73e834806ba0c2597f29157f47867ea617f8d5b98b91.scope - libcontainer container c1176978b8bac7b7dd3b73e834806ba0c2597f29157f47867ea617f8d5b98b91. May 9 00:41:04.442716 kubelet[1760]: E0509 00:41:04.442678 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:04.453062 containerd[1456]: time="2025-05-09T00:41:04.453022835Z" level=info msg="StartContainer for \"c1176978b8bac7b7dd3b73e834806ba0c2597f29157f47867ea617f8d5b98b91\" returns successfully" May 9 00:41:04.462051 systemd[1]: cri-containerd-c1176978b8bac7b7dd3b73e834806ba0c2597f29157f47867ea617f8d5b98b91.scope: Deactivated successfully. 
May 9 00:41:04.491820 containerd[1456]: time="2025-05-09T00:41:04.491744946Z" level=info msg="shim disconnected" id=c1176978b8bac7b7dd3b73e834806ba0c2597f29157f47867ea617f8d5b98b91 namespace=k8s.io May 9 00:41:04.491820 containerd[1456]: time="2025-05-09T00:41:04.491797024Z" level=warning msg="cleaning up after shim disconnected" id=c1176978b8bac7b7dd3b73e834806ba0c2597f29157f47867ea617f8d5b98b91 namespace=k8s.io May 9 00:41:04.491820 containerd[1456]: time="2025-05-09T00:41:04.491805941Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:41:04.904230 kubelet[1760]: E0509 00:41:04.904190 1760 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 9 00:41:04.994586 kubelet[1760]: E0509 00:41:04.994554 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:41:04.996065 containerd[1456]: time="2025-05-09T00:41:04.996034249Z" level=info msg="CreateContainer within sandbox \"f5cfacb8b47e49564eda28f6c43edfb8e580846cd37589b3777dec9e7804fc57\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 00:41:05.007266 containerd[1456]: time="2025-05-09T00:41:05.007223458Z" level=info msg="CreateContainer within sandbox \"f5cfacb8b47e49564eda28f6c43edfb8e580846cd37589b3777dec9e7804fc57\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dbb8f9045c188c29893e2699554e26ba4485acbfa8f10e03eb362cc89403e349\"" May 9 00:41:05.007684 containerd[1456]: time="2025-05-09T00:41:05.007633650Z" level=info msg="StartContainer for \"dbb8f9045c188c29893e2699554e26ba4485acbfa8f10e03eb362cc89403e349\"" May 9 00:41:05.036059 systemd[1]: Started cri-containerd-dbb8f9045c188c29893e2699554e26ba4485acbfa8f10e03eb362cc89403e349.scope - libcontainer container dbb8f9045c188c29893e2699554e26ba4485acbfa8f10e03eb362cc89403e349. May 9 00:41:05.060538 containerd[1456]: time="2025-05-09T00:41:05.060501617Z" level=info msg="StartContainer for \"dbb8f9045c188c29893e2699554e26ba4485acbfa8f10e03eb362cc89403e349\" returns successfully" May 9 00:41:05.067081 systemd[1]: cri-containerd-dbb8f9045c188c29893e2699554e26ba4485acbfa8f10e03eb362cc89403e349.scope: Deactivated successfully. May 9 00:41:05.087687 containerd[1456]: time="2025-05-09T00:41:05.087633329Z" level=info msg="shim disconnected" id=dbb8f9045c188c29893e2699554e26ba4485acbfa8f10e03eb362cc89403e349 namespace=k8s.io May 9 00:41:05.087687 containerd[1456]: time="2025-05-09T00:41:05.087684925Z" level=warning msg="cleaning up after shim disconnected" id=dbb8f9045c188c29893e2699554e26ba4485acbfa8f10e03eb362cc89403e349 namespace=k8s.io May 9 00:41:05.087862 containerd[1456]: time="2025-05-09T00:41:05.087694032Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:41:05.315426 systemd[1]: run-containerd-runc-k8s.io-f5cfacb8b47e49564eda28f6c43edfb8e580846cd37589b3777dec9e7804fc57-runc.aEEdQI.mount: Deactivated successfully. May 9 00:41:05.442833 kubelet[1760]: E0509 00:41:05.442782 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:05.771503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2835329809.mount: Deactivated successfully. 
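
The new pod comes up as a chain of short-lived init containers, and the first two, mount-cgroup and apply-sysctl-overwrites, have already run and exited above; for containers that exit almost immediately, the scope deactivation and "shim disconnected" cleanup right after StartContainer returns is the normal pattern rather than a failure. An illustrative sketch, same journal.log assumption, that reconstructs the order of that chain by pairing each "returns container id" entry with its matching "StartContainer ... returns successfully" entry:

    import re

    # "CreateContainer ... for &ContainerMetadata{Name:<name>,Attempt:0,} returns container id \"<id>\""
    CREATED = re.compile(
        r'ContainerMetadata\{Name:(?P<name>[\w-]+),Attempt:\d+,\} returns container id \\"(?P<id>[0-9a-f]{64})\\"'
    )
    # "StartContainer for \"<id>\" returns successfully"
    STARTED = re.compile(r'StartContainer for \\"(?P<id>[0-9a-f]{64})\\" returns successfully')

    def container_chain(path="journal.log"):  # assumed path
        """Return container names in the order their StartContainer call returned."""
        names, order = {}, []
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                for m in CREATED.finditer(line):
                    names[m.group("id")] = m.group("name")
                for m in STARTED.finditer(line):
                    order.append(names.get(m.group("id"), m.group("id")[:12]))
        return order

    if __name__ == "__main__":
        print(" -> ".join(container_chain()))

For this stretch of the journal the expected order is mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, cilium-operator (from the other sandbox), clean-cilium-state, and finally cilium-agent.
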
May 9 00:41:05.999357 kubelet[1760]: E0509 00:41:05.999320 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:41:06.001135 containerd[1456]: time="2025-05-09T00:41:06.001101630Z" level=info msg="CreateContainer within sandbox \"f5cfacb8b47e49564eda28f6c43edfb8e580846cd37589b3777dec9e7804fc57\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 00:41:06.019947 containerd[1456]: time="2025-05-09T00:41:06.019892922Z" level=info msg="CreateContainer within sandbox \"f5cfacb8b47e49564eda28f6c43edfb8e580846cd37589b3777dec9e7804fc57\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"315932b803f6e18aae8514607e51465ee91c46b84400b6f7cdc1369e13c86957\"" May 9 00:41:06.020586 containerd[1456]: time="2025-05-09T00:41:06.020546211Z" level=info msg="StartContainer for \"315932b803f6e18aae8514607e51465ee91c46b84400b6f7cdc1369e13c86957\"" May 9 00:41:06.050069 systemd[1]: Started cri-containerd-315932b803f6e18aae8514607e51465ee91c46b84400b6f7cdc1369e13c86957.scope - libcontainer container 315932b803f6e18aae8514607e51465ee91c46b84400b6f7cdc1369e13c86957. May 9 00:41:06.077570 containerd[1456]: time="2025-05-09T00:41:06.077535202Z" level=info msg="StartContainer for \"315932b803f6e18aae8514607e51465ee91c46b84400b6f7cdc1369e13c86957\" returns successfully" May 9 00:41:06.079572 systemd[1]: cri-containerd-315932b803f6e18aae8514607e51465ee91c46b84400b6f7cdc1369e13c86957.scope: Deactivated successfully. May 9 00:41:06.184618 containerd[1456]: time="2025-05-09T00:41:06.184555604Z" level=info msg="shim disconnected" id=315932b803f6e18aae8514607e51465ee91c46b84400b6f7cdc1369e13c86957 namespace=k8s.io May 9 00:41:06.184618 containerd[1456]: time="2025-05-09T00:41:06.184608234Z" level=warning msg="cleaning up after shim disconnected" id=315932b803f6e18aae8514607e51465ee91c46b84400b6f7cdc1369e13c86957 namespace=k8s.io May 9 00:41:06.184618 containerd[1456]: time="2025-05-09T00:41:06.184617281Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:41:06.262172 containerd[1456]: time="2025-05-09T00:41:06.262134319Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:41:06.262858 containerd[1456]: time="2025-05-09T00:41:06.262794673Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 9 00:41:06.263898 containerd[1456]: time="2025-05-09T00:41:06.263853646Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:41:06.265306 containerd[1456]: time="2025-05-09T00:41:06.265272677Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.869680666s" May 9 00:41:06.265345 containerd[1456]: time="2025-05-09T00:41:06.265306390Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 9 00:41:06.267292 containerd[1456]: time="2025-05-09T00:41:06.267257984Z" level=info msg="CreateContainer within sandbox \"ab1fbb7c0e1876e46b472a3242b9a56f0142b252937a9560cb7c5f3fd85fcf6d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 9 00:41:06.278144 containerd[1456]: time="2025-05-09T00:41:06.278109821Z" level=info msg="CreateContainer within sandbox \"ab1fbb7c0e1876e46b472a3242b9a56f0142b252937a9560cb7c5f3fd85fcf6d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"53c52388c996388f5dbce714e55d438dc86131283b82506fbfd673f0b17663b5\"" May 9 00:41:06.278507 containerd[1456]: time="2025-05-09T00:41:06.278476871Z" level=info msg="StartContainer for \"53c52388c996388f5dbce714e55d438dc86131283b82506fbfd673f0b17663b5\"" May 9 00:41:06.304054 systemd[1]: Started cri-containerd-53c52388c996388f5dbce714e55d438dc86131283b82506fbfd673f0b17663b5.scope - libcontainer container 53c52388c996388f5dbce714e55d438dc86131283b82506fbfd673f0b17663b5. May 9 00:41:06.328064 containerd[1456]: time="2025-05-09T00:41:06.328017060Z" level=info msg="StartContainer for \"53c52388c996388f5dbce714e55d438dc86131283b82506fbfd673f0b17663b5\" returns successfully" May 9 00:41:06.443356 kubelet[1760]: E0509 00:41:06.443316 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:07.002614 kubelet[1760]: E0509 00:41:07.002581 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:41:07.003938 kubelet[1760]: E0509 00:41:07.003900 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:41:07.004405 containerd[1456]: time="2025-05-09T00:41:07.004365101Z" level=info msg="CreateContainer within sandbox \"f5cfacb8b47e49564eda28f6c43edfb8e580846cd37589b3777dec9e7804fc57\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 00:41:07.018521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3984506692.mount: Deactivated successfully. 
May 9 00:41:07.020914 containerd[1456]: time="2025-05-09T00:41:07.020874590Z" level=info msg="CreateContainer within sandbox \"f5cfacb8b47e49564eda28f6c43edfb8e580846cd37589b3777dec9e7804fc57\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5ae9337e41aa6e450c61852ad57095fa18efbe9f0a1f522058deab3f836fc5b2\"" May 9 00:41:07.021374 containerd[1456]: time="2025-05-09T00:41:07.021338984Z" level=info msg="StartContainer for \"5ae9337e41aa6e450c61852ad57095fa18efbe9f0a1f522058deab3f836fc5b2\"" May 9 00:41:07.022607 kubelet[1760]: I0509 00:41:07.022456 1760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-4pv2t" podStartSLOduration=2.151764258 podStartE2EDuration="4.022440718s" podCreationTimestamp="2025-05-09 00:41:03 +0000 UTC" firstStartedPulling="2025-05-09 00:41:04.395338304 +0000 UTC m=+55.443707214" lastFinishedPulling="2025-05-09 00:41:06.266014764 +0000 UTC m=+57.314383674" observedRunningTime="2025-05-09 00:41:07.022178574 +0000 UTC m=+58.070547474" watchObservedRunningTime="2025-05-09 00:41:07.022440718 +0000 UTC m=+58.070809628" May 9 00:41:07.048055 systemd[1]: Started cri-containerd-5ae9337e41aa6e450c61852ad57095fa18efbe9f0a1f522058deab3f836fc5b2.scope - libcontainer container 5ae9337e41aa6e450c61852ad57095fa18efbe9f0a1f522058deab3f836fc5b2. May 9 00:41:07.069879 systemd[1]: cri-containerd-5ae9337e41aa6e450c61852ad57095fa18efbe9f0a1f522058deab3f836fc5b2.scope: Deactivated successfully. May 9 00:41:07.071895 containerd[1456]: time="2025-05-09T00:41:07.071849479Z" level=info msg="StartContainer for \"5ae9337e41aa6e450c61852ad57095fa18efbe9f0a1f522058deab3f836fc5b2\" returns successfully" May 9 00:41:07.278300 containerd[1456]: time="2025-05-09T00:41:07.278140062Z" level=info msg="shim disconnected" id=5ae9337e41aa6e450c61852ad57095fa18efbe9f0a1f522058deab3f836fc5b2 namespace=k8s.io May 9 00:41:07.278300 containerd[1456]: time="2025-05-09T00:41:07.278196508Z" level=warning msg="cleaning up after shim disconnected" id=5ae9337e41aa6e450c61852ad57095fa18efbe9f0a1f522058deab3f836fc5b2 namespace=k8s.io May 9 00:41:07.278300 containerd[1456]: time="2025-05-09T00:41:07.278205806Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:41:07.316903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ae9337e41aa6e450c61852ad57095fa18efbe9f0a1f522058deab3f836fc5b2-rootfs.mount: Deactivated successfully. 
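
The pod_startup_latency_tracker entry above is internally consistent: lastFinishedPulling minus firstStartedPulling is about 1.87 s, in line with containerd's reported pull time of 1.869680666s for the operator image; observedRunningTime minus podCreationTimestamp gives roughly the 4.022 s podStartE2EDuration; and the reported podStartSLOduration is exactly the end-to-end duration minus that pull window (4.022440718 - 1.870676460 = 2.151764258). A quick check with the timestamps copied from that entry, truncated to microseconds because strptime's %f does not accept nanoseconds:

    from datetime import datetime

    # Timestamps copied from the kubelet pod_startup_latency_tracker entry above
    # (the creation time is only recorded to whole seconds there).
    FMT = "%Y-%m-%d %H:%M:%S.%f"
    created    = datetime.strptime("2025-05-09 00:41:03.000000", FMT)
    first_pull = datetime.strptime("2025-05-09 00:41:04.395338", FMT)
    last_pull  = datetime.strptime("2025-05-09 00:41:06.266014", FMT)
    running    = datetime.strptime("2025-05-09 00:41:07.022178", FMT)

    pull_window = (last_pull - first_pull).total_seconds()   # ~1.87 s
    e2e         = (running - created).total_seconds()        # ~4.02 s
    print(f"pull window : {pull_window:.6f} s")
    print(f"end to end  : {e2e:.6f} s")
    # Small deviation from podStartSLOduration=2.151764258 comes from the truncated inputs.
    print(f"e2e - pull  : {e2e - pull_window:.6f} s")
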
May 9 00:41:07.444385 kubelet[1760]: E0509 00:41:07.444325 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:08.008183 kubelet[1760]: E0509 00:41:08.008142 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:41:08.008341 kubelet[1760]: E0509 00:41:08.008273 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:41:08.009904 containerd[1456]: time="2025-05-09T00:41:08.009859216Z" level=info msg="CreateContainer within sandbox \"f5cfacb8b47e49564eda28f6c43edfb8e580846cd37589b3777dec9e7804fc57\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 9 00:41:08.029890 containerd[1456]: time="2025-05-09T00:41:08.029848947Z" level=info msg="CreateContainer within sandbox \"f5cfacb8b47e49564eda28f6c43edfb8e580846cd37589b3777dec9e7804fc57\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9ade4623ec38ee1c25585f609bacc4d7ceddbe010ed14b718d5b3bfcc6f34635\"" May 9 00:41:08.030347 containerd[1456]: time="2025-05-09T00:41:08.030287511Z" level=info msg="StartContainer for \"9ade4623ec38ee1c25585f609bacc4d7ceddbe010ed14b718d5b3bfcc6f34635\"" May 9 00:41:08.061129 systemd[1]: Started cri-containerd-9ade4623ec38ee1c25585f609bacc4d7ceddbe010ed14b718d5b3bfcc6f34635.scope - libcontainer container 9ade4623ec38ee1c25585f609bacc4d7ceddbe010ed14b718d5b3bfcc6f34635. May 9 00:41:08.088585 containerd[1456]: time="2025-05-09T00:41:08.088541226Z" level=info msg="StartContainer for \"9ade4623ec38ee1c25585f609bacc4d7ceddbe010ed14b718d5b3bfcc6f34635\" returns successfully" May 9 00:41:08.317076 systemd[1]: run-containerd-runc-k8s.io-9ade4623ec38ee1c25585f609bacc4d7ceddbe010ed14b718d5b3bfcc6f34635-runc.NMgvA6.mount: Deactivated successfully. 
May 9 00:41:08.444837 kubelet[1760]: E0509 00:41:08.444802 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:08.512968 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 9 00:41:09.012591 kubelet[1760]: E0509 00:41:09.012559 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:41:09.028718 kubelet[1760]: I0509 00:41:09.028588 1760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-87qls" podStartSLOduration=6.028571519 podStartE2EDuration="6.028571519s" podCreationTimestamp="2025-05-09 00:41:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:41:09.02788621 +0000 UTC m=+60.076255120" watchObservedRunningTime="2025-05-09 00:41:09.028571519 +0000 UTC m=+60.076940429" May 9 00:41:09.407011 kubelet[1760]: E0509 00:41:09.406980 1760 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:09.419222 containerd[1456]: time="2025-05-09T00:41:09.419154306Z" level=info msg="StopPodSandbox for \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\"" May 9 00:41:09.419554 containerd[1456]: time="2025-05-09T00:41:09.419254174Z" level=info msg="TearDown network for sandbox \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\" successfully" May 9 00:41:09.419554 containerd[1456]: time="2025-05-09T00:41:09.419267108Z" level=info msg="StopPodSandbox for \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\" returns successfully" May 9 00:41:09.420171 containerd[1456]: time="2025-05-09T00:41:09.419785153Z" level=info msg="RemovePodSandbox for \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\"" May 9 00:41:09.420171 containerd[1456]: time="2025-05-09T00:41:09.419824206Z" level=info msg="Forcibly stopping sandbox \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\"" May 9 00:41:09.420171 containerd[1456]: time="2025-05-09T00:41:09.419888627Z" level=info msg="TearDown network for sandbox \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\" successfully" May 9 00:41:09.425673 containerd[1456]: time="2025-05-09T00:41:09.425634796Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 9 00:41:09.425804 containerd[1456]: time="2025-05-09T00:41:09.425779088Z" level=info msg="RemovePodSandbox \"29b78281fba5bcb1c06544d313ebd59b9f76341d56557a98fee4bcc9865785d7\" returns successfully" May 9 00:41:09.445504 kubelet[1760]: E0509 00:41:09.445460 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:10.315644 kubelet[1760]: E0509 00:41:10.315611 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:41:10.445651 kubelet[1760]: E0509 00:41:10.445604 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:11.446265 kubelet[1760]: E0509 00:41:11.446222 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:11.544762 systemd-networkd[1403]: lxc_health: Link UP May 9 00:41:11.556082 systemd-networkd[1403]: lxc_health: Gained carrier May 9 00:41:12.317021 kubelet[1760]: E0509 00:41:12.316688 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:41:12.446537 kubelet[1760]: E0509 00:41:12.446477 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:12.819163 systemd-networkd[1403]: lxc_health: Gained IPv6LL May 9 00:41:13.019229 kubelet[1760]: E0509 00:41:13.019195 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:41:13.447443 kubelet[1760]: E0509 00:41:13.447411 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:14.020190 kubelet[1760]: E0509 00:41:14.020159 1760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:41:14.447926 kubelet[1760]: E0509 00:41:14.447877 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:15.448595 kubelet[1760]: E0509 00:41:15.448536 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:16.448994 kubelet[1760]: E0509 00:41:16.448957 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:17.449729 kubelet[1760]: E0509 00:41:17.449685 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:17.814215 systemd[1]: run-containerd-runc-k8s.io-9ade4623ec38ee1c25585f609bacc4d7ceddbe010ed14b718d5b3bfcc6f34635-runc.hru7LP.mount: Deactivated successfully. May 9 00:41:18.450185 kubelet[1760]: E0509 00:41:18.450148 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:41:19.450673 kubelet[1760]: E0509 00:41:19.450644 1760 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
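
From here on the journal settles into the two recurring kubelet messages seen throughout this stretch: the static-pod config watcher's "Unable to read config path" for the missing /etc/kubernetes/manifests directory, and the "Nameserver limits exceeded" warning because the host resolv.conf lists more nameservers than the three the kubelet will pass through to pods. One last illustrative sketch, under the same journal.log assumption, that tallies how often each quoted kubelet headline message repeats:

    import re
    from collections import Counter

    # kubelet klog entries carry a quoted headline right after the source location, e.g.
    #   kubelet[1760]: E0509 00:41:19.450644 1760 file_linux.go:61] "Unable to read config path" err=...
    HEADLINE = re.compile(
        r'kubelet\[\d+\]: [IWEF]\d{4} [\d:.]+\s+\d+ [\w./-]+:\d+\] "(?P<msg>(?:[^"\\]|\\.)+)"'
    )

    def recurring_messages(path="journal.log"):  # assumed path
        counts = Counter()
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                for m in HEADLINE.finditer(line):
                    counts[m.group("msg")] += 1
        return counts

    if __name__ == "__main__":
        for msg, n in recurring_messages().most_common(10):
            print(f"{n:4d}  {msg}")
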