Jan 17 00:41:38.028344 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026 Jan 17 00:41:38.028378 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:41:38.028395 kernel: BIOS-provided physical RAM map: Jan 17 00:41:38.028404 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 17 00:41:38.028413 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 17 00:41:38.028421 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 17 00:41:38.028431 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 17 00:41:38.028440 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 17 00:41:38.028449 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 17 00:41:38.028458 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 17 00:41:38.028470 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 17 00:41:38.028479 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 17 00:41:38.028507 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 17 00:41:38.028517 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 17 00:41:38.028544 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 17 00:41:38.028554 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 17 00:41:38.028567 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 17 00:41:38.028577 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 17 00:41:38.028586 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 17 00:41:38.028596 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 17 00:41:38.028605 kernel: NX (Execute Disable) protection: active Jan 17 00:41:38.028614 kernel: APIC: Static calls initialized Jan 17 00:41:38.028624 kernel: efi: EFI v2.7 by EDK II Jan 17 00:41:38.028633 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Jan 17 00:41:38.028643 kernel: SMBIOS 2.8 present. 
Jan 17 00:41:38.028652 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 17 00:41:38.028661 kernel: Hypervisor detected: KVM Jan 17 00:41:38.028674 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 00:41:38.028684 kernel: kvm-clock: using sched offset of 13097364387 cycles Jan 17 00:41:38.028694 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 00:41:38.028703 kernel: tsc: Detected 2445.424 MHz processor Jan 17 00:41:38.028713 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 00:41:38.028723 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 00:41:38.028733 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 17 00:41:38.028743 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 17 00:41:38.028753 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 00:41:38.028766 kernel: Using GB pages for direct mapping Jan 17 00:41:38.028776 kernel: Secure boot disabled Jan 17 00:41:38.028785 kernel: ACPI: Early table checksum verification disabled Jan 17 00:41:38.028795 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 17 00:41:38.028810 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 17 00:41:38.028821 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:41:38.028831 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:41:38.028845 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 17 00:41:38.028855 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:41:38.028881 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:41:38.028892 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:41:38.028902 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:41:38.028912 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 17 00:41:38.028922 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 17 00:41:38.028937 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 17 00:41:38.029086 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 17 00:41:38.029248 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 17 00:41:38.029261 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 17 00:41:38.029272 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 17 00:41:38.029282 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 17 00:41:38.029292 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 17 00:41:38.029302 kernel: No NUMA configuration found Jan 17 00:41:38.029331 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 17 00:41:38.029348 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 17 00:41:38.029359 kernel: Zone ranges: Jan 17 00:41:38.029369 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 00:41:38.029379 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 17 00:41:38.029389 kernel: Normal empty Jan 17 00:41:38.029399 kernel: Movable zone start for each node Jan 17 00:41:38.029409 kernel: Early memory node ranges Jan 17 00:41:38.029419 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 17 00:41:38.029429 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 17 00:41:38.029443 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 17 00:41:38.029454 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 17 00:41:38.029464 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 17 00:41:38.029474 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 17 00:41:38.029499 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 17 00:41:38.029510 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 00:41:38.029520 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 17 00:41:38.029530 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 17 00:41:38.029540 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 00:41:38.029550 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 17 00:41:38.029565 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 17 00:41:38.029575 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 17 00:41:38.029585 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 17 00:41:38.029595 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 00:41:38.029605 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 00:41:38.029615 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 17 00:41:38.029626 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 00:41:38.029636 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 00:41:38.029646 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 00:41:38.029660 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 00:41:38.029670 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 00:41:38.029680 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 17 00:41:38.029690 kernel: TSC deadline timer available Jan 17 00:41:38.029700 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 17 00:41:38.029710 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 00:41:38.029721 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 17 00:41:38.029731 kernel: kvm-guest: setup PV sched yield Jan 17 00:41:38.029741 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 17 00:41:38.029755 kernel: Booting paravirtualized kernel on KVM Jan 17 00:41:38.029765 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 00:41:38.029831 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 17 00:41:38.029842 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 17 00:41:38.029852 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 17 00:41:38.029862 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 17 00:41:38.029872 kernel: kvm-guest: PV spinlocks enabled Jan 17 00:41:38.029882 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 00:41:38.029893 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 
00:41:38.029926 kernel: random: crng init done Jan 17 00:41:38.029937 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 00:41:38.029947 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 00:41:38.029957 kernel: Fallback order for Node 0: 0 Jan 17 00:41:38.029967 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 17 00:41:38.030016 kernel: Policy zone: DMA32 Jan 17 00:41:38.030027 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:41:38.030065 kernel: Memory: 2400612K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 166128K reserved, 0K cma-reserved) Jan 17 00:41:38.030084 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 17 00:41:38.030095 kernel: ftrace: allocating 37989 entries in 149 pages Jan 17 00:41:38.030105 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 00:41:38.030115 kernel: Dynamic Preempt: voluntary Jan 17 00:41:38.030212 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:41:38.030240 kernel: rcu: RCU event tracing is enabled. Jan 17 00:41:38.030255 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 17 00:41:38.030267 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:41:38.030278 kernel: Rude variant of Tasks RCU enabled. Jan 17 00:41:38.030289 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:41:38.030300 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 00:41:38.030310 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 17 00:41:38.030697 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 17 00:41:38.030709 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 00:41:38.030720 kernel: Console: colour dummy device 80x25 Jan 17 00:41:38.030731 kernel: printk: console [ttyS0] enabled Jan 17 00:41:38.030760 kernel: ACPI: Core revision 20230628 Jan 17 00:41:38.030776 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 17 00:41:38.030787 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 00:41:38.030798 kernel: x2apic enabled Jan 17 00:41:38.030808 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 00:41:38.030819 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 17 00:41:38.030830 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 17 00:41:38.030841 kernel: kvm-guest: setup PV IPIs Jan 17 00:41:38.030851 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 17 00:41:38.030862 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 17 00:41:38.030877 kernel: Calibrating delay loop (skipped) preset value.. 
4890.84 BogoMIPS (lpj=2445424) Jan 17 00:41:38.030887 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 17 00:41:38.030898 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 17 00:41:38.030909 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 17 00:41:38.030920 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 00:41:38.030930 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 00:41:38.030941 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 17 00:41:38.030952 kernel: Speculative Store Bypass: Vulnerable Jan 17 00:41:38.030962 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 17 00:41:38.030978 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 17 00:41:38.030988 kernel: active return thunk: srso_alias_return_thunk Jan 17 00:41:38.030999 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 17 00:41:38.031010 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 17 00:41:38.031036 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 00:41:38.031077 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 00:41:38.031088 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 00:41:38.031099 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 00:41:38.031115 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 00:41:38.031280 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 17 00:41:38.031295 kernel: Freeing SMP alternatives memory: 32K Jan 17 00:41:38.031306 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:41:38.031317 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:41:38.031327 kernel: landlock: Up and running. Jan 17 00:41:38.031338 kernel: SELinux: Initializing. Jan 17 00:41:38.031349 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:41:38.031360 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:41:38.031378 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 17 00:41:38.031389 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 00:41:38.031399 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 00:41:38.031410 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 00:41:38.031421 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 17 00:41:38.031432 kernel: signal: max sigframe size: 1776 Jan 17 00:41:38.031442 kernel: rcu: Hierarchical SRCU implementation. Jan 17 00:41:38.031453 kernel: rcu: Max phase no-delay instances is 400. Jan 17 00:41:38.031464 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 00:41:38.031479 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:41:38.031490 kernel: smpboot: x86: Booting SMP configuration: Jan 17 00:41:38.031501 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 17 00:41:38.031511 kernel: smp: Brought up 1 node, 4 CPUs Jan 17 00:41:38.031522 kernel: smpboot: Max logical packages: 1 Jan 17 00:41:38.031533 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Jan 17 00:41:38.031543 kernel: devtmpfs: initialized Jan 17 00:41:38.031554 kernel: x86/mm: Memory block size: 128MB Jan 17 00:41:38.031565 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 17 00:41:38.031580 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 17 00:41:38.031591 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 17 00:41:38.031602 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 17 00:41:38.031613 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 17 00:41:38.031624 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:41:38.031634 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 17 00:41:38.031645 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:41:38.031656 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:41:38.031750 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:41:38.031768 kernel: audit: type=2000 audit(1768610491.907:1): state=initialized audit_enabled=0 res=1 Jan 17 00:41:38.031779 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:41:38.031789 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 00:41:38.031800 kernel: cpuidle: using governor menu Jan 17 00:41:38.031811 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 00:41:38.031821 kernel: dca service started, version 1.12.1 Jan 17 00:41:38.031832 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 17 00:41:38.031843 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 17 00:41:38.031858 kernel: PCI: Using configuration type 1 for base access Jan 17 00:41:38.031868 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 00:41:38.031879 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 00:41:38.031890 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 00:41:38.031901 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:41:38.031911 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:41:38.031922 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:41:38.031933 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:41:38.031943 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:41:38.031958 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 00:41:38.031969 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 00:41:38.031979 kernel: ACPI: Interpreter enabled Jan 17 00:41:38.031990 kernel: ACPI: PM: (supports S0 S3 S5) Jan 17 00:41:38.032001 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 00:41:38.032012 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 00:41:38.032022 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 00:41:38.032033 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 17 00:41:38.032075 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 00:41:38.035363 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 00:41:38.035584 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 17 00:41:38.035772 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 17 00:41:38.035786 kernel: PCI host bridge to bus 0000:00 Jan 17 00:41:38.036381 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 00:41:38.036560 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 00:41:38.036728 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 00:41:38.036905 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 17 00:41:38.039399 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 17 00:41:38.039587 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 17 00:41:38.039757 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 00:41:38.044005 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 17 00:41:38.044386 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 17 00:41:38.044592 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 17 00:41:38.044775 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 17 00:41:38.044956 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 17 00:41:38.045387 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 17 00:41:38.045893 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 00:41:38.046943 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 00:41:38.047216 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 17 00:41:38.047416 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 17 00:41:38.047693 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 17 00:41:38.047952 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 17 00:41:38.052420 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 17 00:41:38.052785 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] 
Jan 17 00:41:38.053740 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jan 17 00:41:38.054579 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 00:41:38.054808 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 17 00:41:38.054996 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 17 00:41:38.055618 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 17 00:41:38.056350 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 17 00:41:38.056931 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 17 00:41:38.057210 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 17 00:41:38.057462 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 17 00:41:38.057664 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 17 00:41:38.057845 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 17 00:41:38.058428 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 17 00:41:38.058623 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 17 00:41:38.058639 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 00:41:38.058651 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 00:41:38.059220 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 00:41:38.059240 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 00:41:38.059251 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 17 00:41:38.059262 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 17 00:41:38.059272 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 17 00:41:38.059283 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 17 00:41:38.059294 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 17 00:41:38.059305 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 17 00:41:38.059315 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 17 00:41:38.059328 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 17 00:41:38.059342 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 17 00:41:38.059353 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 17 00:41:38.059364 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 17 00:41:38.059375 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 17 00:41:38.059386 kernel: iommu: Default domain type: Translated Jan 17 00:41:38.059397 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 00:41:38.059407 kernel: efivars: Registered efivars operations Jan 17 00:41:38.059418 kernel: PCI: Using ACPI for IRQ routing Jan 17 00:41:38.059429 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 00:41:38.059444 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 17 00:41:38.059455 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 17 00:41:38.059466 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 17 00:41:38.059476 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 17 00:41:38.060493 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 17 00:41:38.060686 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 17 00:41:38.061401 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 00:41:38.061422 kernel: vgaarb: loaded Jan 17 00:41:38.061440 kernel: hpet0: at MMIO 
0xfed00000, IRQs 2, 8, 0 Jan 17 00:41:38.061452 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 17 00:41:38.061463 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 00:41:38.061474 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 00:41:38.061485 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 00:41:38.061496 kernel: pnp: PnP ACPI init Jan 17 00:41:38.061807 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 17 00:41:38.061882 kernel: pnp: PnP ACPI: found 6 devices Jan 17 00:41:38.061894 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 00:41:38.061911 kernel: NET: Registered PF_INET protocol family Jan 17 00:41:38.061923 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 00:41:38.061934 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 00:41:38.061945 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 00:41:38.061956 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 00:41:38.061967 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 00:41:38.061978 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 00:41:38.061989 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:41:38.062004 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:41:38.062014 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 00:41:38.062025 kernel: NET: Registered PF_XDP protocol family Jan 17 00:41:38.062678 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 17 00:41:38.062873 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 17 00:41:38.063123 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 00:41:38.063347 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 00:41:38.063516 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 00:41:38.063692 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 17 00:41:38.063860 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 17 00:41:38.064033 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 17 00:41:38.064454 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:41:38.064466 kernel: Initialise system trusted keyrings Jan 17 00:41:38.064478 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 00:41:38.064488 kernel: Key type asymmetric registered Jan 17 00:41:38.064499 kernel: Asymmetric key parser 'x509' registered Jan 17 00:41:38.064510 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 00:41:38.064527 kernel: io scheduler mq-deadline registered Jan 17 00:41:38.064538 kernel: io scheduler kyber registered Jan 17 00:41:38.064549 kernel: io scheduler bfq registered Jan 17 00:41:38.064560 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 00:41:38.064572 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 17 00:41:38.064583 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 17 00:41:38.064593 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 17 00:41:38.064604 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:41:38.066841 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 
115200) is a 16550A Jan 17 00:41:38.066888 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 00:41:38.066900 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 00:41:38.066911 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 00:41:38.067841 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 17 00:41:38.067862 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 00:41:38.068078 kernel: rtc_cmos 00:04: registered as rtc0 Jan 17 00:41:38.068317 kernel: rtc_cmos 00:04: setting system clock to 2026-01-17T00:41:36 UTC (1768610496) Jan 17 00:41:38.068492 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 17 00:41:38.068513 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 17 00:41:38.068525 kernel: efifb: probing for efifb Jan 17 00:41:38.068536 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 17 00:41:38.068547 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 17 00:41:38.068557 kernel: efifb: scrolling: redraw Jan 17 00:41:38.068568 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 17 00:41:38.068579 kernel: Console: switching to colour frame buffer device 100x37 Jan 17 00:41:38.068590 kernel: fb0: EFI VGA frame buffer device Jan 17 00:41:38.068601 kernel: pstore: Using crash dump compression: deflate Jan 17 00:41:38.068616 kernel: pstore: Registered efi_pstore as persistent store backend Jan 17 00:41:38.068627 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:41:38.068638 kernel: Segment Routing with IPv6 Jan 17 00:41:38.068648 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:41:38.068659 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:41:38.068670 kernel: Key type dns_resolver registered Jan 17 00:41:38.068681 kernel: IPI shorthand broadcast: enabled Jan 17 00:41:38.068718 kernel: sched_clock: Marking stable (3321021122, 704265826)->(4621482852, -596195904) Jan 17 00:41:38.068733 kernel: registered taskstats version 1 Jan 17 00:41:38.068748 kernel: Loading compiled-in X.509 certificates Jan 17 00:41:38.068760 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4' Jan 17 00:41:38.068771 kernel: Key type .fscrypt registered Jan 17 00:41:38.068782 kernel: Key type fscrypt-provisioning registered Jan 17 00:41:38.068793 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 00:41:38.068805 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:41:38.068816 kernel: ima: No architecture policies found Jan 17 00:41:38.068827 kernel: clk: Disabling unused clocks Jan 17 00:41:38.068838 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 17 00:41:38.068853 kernel: Write protecting the kernel read-only data: 36864k Jan 17 00:41:38.068865 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 17 00:41:38.068876 kernel: Run /init as init process Jan 17 00:41:38.068887 kernel: with arguments: Jan 17 00:41:38.068899 kernel: /init Jan 17 00:41:38.068910 kernel: with environment: Jan 17 00:41:38.068922 kernel: HOME=/ Jan 17 00:41:38.068933 kernel: TERM=linux Jan 17 00:41:38.068947 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:41:38.068965 systemd[1]: Detected virtualization kvm. Jan 17 00:41:38.068976 systemd[1]: Detected architecture x86-64. Jan 17 00:41:38.068988 systemd[1]: Running in initrd. Jan 17 00:41:38.068999 systemd[1]: No hostname configured, using default hostname. Jan 17 00:41:38.069010 systemd[1]: Hostname set to . Jan 17 00:41:38.069022 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:41:38.069034 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:41:38.075191 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:41:38.075206 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:41:38.075219 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:41:38.075232 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:41:38.075244 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:41:38.075267 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:41:38.075281 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:41:38.075294 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:41:38.075306 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:41:38.075318 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:41:38.075330 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:41:38.075346 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:41:38.075358 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:41:38.075370 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:41:38.075382 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:41:38.075394 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:41:38.075406 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:41:38.075418 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 17 00:41:38.075429 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:41:38.075441 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:41:38.075457 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:41:38.075469 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:41:38.075481 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:41:38.075493 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:41:38.075505 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:41:38.075517 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:41:38.075529 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:41:38.075541 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:41:38.075553 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:41:38.075569 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:41:38.075581 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:41:38.075631 systemd-journald[194]: Collecting audit messages is disabled. Jan 17 00:41:38.075658 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:41:38.075676 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:41:38.075688 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:41:38.075700 systemd-journald[194]: Journal started Jan 17 00:41:38.075728 systemd-journald[194]: Runtime Journal (/run/log/journal/b6f3fb943bae44e7bbf9414f1028d2fc) is 6.0M, max 48.3M, 42.2M free. Jan 17 00:41:38.076920 systemd-modules-load[195]: Inserted module 'overlay' Jan 17 00:41:38.082192 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:41:38.090243 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:41:38.111745 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:41:38.117422 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:41:38.120426 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:41:38.192219 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:41:38.199353 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:41:38.261687 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:41:38.262438 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:41:38.270508 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 17 00:41:38.295931 dracut-cmdline[226]: dracut-dracut-053 Jan 17 00:41:38.311961 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:41:38.356745 kernel: Bridge firewalling registered Jan 17 00:41:38.356701 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 17 00:41:38.361097 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:41:38.386644 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:41:38.421760 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:41:38.445891 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:41:38.564866 systemd-resolved[262]: Positive Trust Anchors: Jan 17 00:41:38.564974 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:41:38.565017 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:41:38.569787 systemd-resolved[262]: Defaulting to hostname 'linux'. Jan 17 00:41:38.573199 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:41:38.697849 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:41:38.760290 kernel: SCSI subsystem initialized Jan 17 00:41:38.775791 kernel: Loading iSCSI transport class v2.0-870. Jan 17 00:41:38.802334 kernel: iscsi: registered transport (tcp) Jan 17 00:41:38.856385 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:41:38.857027 kernel: QLogic iSCSI HBA Driver Jan 17 00:41:38.990021 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:41:39.015395 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:41:39.115123 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:41:39.115255 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:41:39.115277 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:41:39.242877 kernel: raid6: avx2x4 gen() 9829 MB/s Jan 17 00:41:39.256370 kernel: raid6: avx2x2 gen() 15190 MB/s Jan 17 00:41:39.282815 kernel: raid6: avx2x1 gen() 9145 MB/s Jan 17 00:41:39.283586 kernel: raid6: using algorithm avx2x2 gen() 15190 MB/s Jan 17 00:41:39.308822 kernel: raid6: .... 
xor() 14240 MB/s, rmw enabled Jan 17 00:41:39.309569 kernel: raid6: using avx2x2 recovery algorithm Jan 17 00:41:39.344014 kernel: xor: automatically using best checksumming function avx Jan 17 00:41:39.735444 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:41:39.780116 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:41:39.825619 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:41:39.859949 systemd-udevd[421]: Using default interface naming scheme 'v255'. Jan 17 00:41:39.876732 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:41:39.906920 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 00:41:39.963033 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Jan 17 00:41:40.062544 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:41:40.098763 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:41:40.339379 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:41:40.376477 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:41:40.452333 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:41:40.470725 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:41:40.488397 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:41:40.548243 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 00:41:40.507847 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:41:40.558025 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:41:40.584532 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:41:40.585021 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:41:40.603969 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:41:40.632940 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:41:40.646273 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:41:40.677555 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:41:40.730981 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:41:40.760460 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 17 00:41:40.745160 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:41:40.802986 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 17 00:41:40.805629 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:41:40.858278 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 00:41:40.858329 kernel: GPT:9289727 != 19775487 Jan 17 00:41:40.858373 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 00:41:40.858394 kernel: GPT:9289727 != 19775487 Jan 17 00:41:40.858411 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 00:41:40.858429 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:41:40.806104 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:41:40.962938 kernel: libata version 3.00 loaded. 
Jan 17 00:41:40.963548 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 00:41:40.964725 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:41:40.995914 kernel: AES CTR mode by8 optimization enabled Jan 17 00:41:41.044668 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:41:41.142993 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:41:41.203096 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 00:41:41.233345 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (475) Jan 17 00:41:41.233373 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (485) Jan 17 00:41:41.291503 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 00:41:41.344698 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:41:41.359894 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 00:41:41.372540 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 00:41:41.396605 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:41:41.436446 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:41:41.466189 kernel: ahci 0000:00:1f.2: version 3.0 Jan 17 00:41:41.466822 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 17 00:41:41.466849 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 17 00:41:41.467724 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 17 00:41:41.468422 disk-uuid[564]: Primary Header is updated. Jan 17 00:41:41.468422 disk-uuid[564]: Secondary Entries is updated. Jan 17 00:41:41.468422 disk-uuid[564]: Secondary Header is updated. 
Jan 17 00:41:41.540830 kernel: scsi host0: ahci Jan 17 00:41:41.542653 kernel: scsi host1: ahci Jan 17 00:41:41.542927 kernel: scsi host2: ahci Jan 17 00:41:41.543246 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:41:41.543264 kernel: scsi host3: ahci Jan 17 00:41:41.543480 kernel: scsi host4: ahci Jan 17 00:41:41.543697 kernel: scsi host5: ahci Jan 17 00:41:41.543911 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 17 00:41:41.543927 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 17 00:41:41.543948 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 17 00:41:41.543962 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 17 00:41:41.543977 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 17 00:41:41.543991 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 17 00:41:41.544005 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:41:41.825317 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 17 00:41:41.825398 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 17 00:41:41.837201 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 17 00:41:41.844216 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 00:41:41.850507 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 00:41:41.859413 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 00:41:41.860714 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 17 00:41:41.871030 kernel: ata3.00: applying bridge limits Jan 17 00:41:41.871121 kernel: ata3.00: configured for UDMA/100 Jan 17 00:41:41.904346 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 17 00:41:42.031629 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 17 00:41:42.032994 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 00:41:42.102453 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 17 00:41:42.104408 kernel: hrtimer: interrupt took 3360576 ns Jan 17 00:41:42.593224 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:41:42.593297 disk-uuid[566]: The operation has completed successfully. Jan 17 00:41:42.737771 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:41:42.737962 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:41:42.798580 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:41:42.833705 sh[607]: Success Jan 17 00:41:42.950556 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 17 00:41:43.038987 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:41:43.086187 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:41:43.104989 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 00:41:43.182558 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a Jan 17 00:41:43.182691 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:41:43.190604 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:41:43.197204 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:41:43.197268 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:41:43.229751 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Jan 17 00:41:43.244661 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:41:43.265256 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:41:43.281722 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:41:43.327179 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:41:43.327243 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:41:43.327262 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:41:43.352420 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:41:43.405504 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:41:43.425023 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:41:43.444407 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:41:43.498407 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:41:43.750570 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:41:43.783802 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:41:43.799569 ignition[723]: Ignition 2.19.0 Jan 17 00:41:43.799592 ignition[723]: Stage: fetch-offline Jan 17 00:41:43.803376 ignition[723]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:41:43.803405 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:41:43.803734 ignition[723]: parsed url from cmdline: "" Jan 17 00:41:43.803744 ignition[723]: no config URL provided Jan 17 00:41:43.803756 ignition[723]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:41:43.803776 ignition[723]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:41:43.803853 ignition[723]: op(1): [started] loading QEMU firmware config module Jan 17 00:41:43.803869 ignition[723]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 17 00:41:43.867647 ignition[723]: op(1): [finished] loading QEMU firmware config module Jan 17 00:41:43.868279 ignition[723]: parsing config with SHA512: ec4c1038b9eabf948f76c3f8f21e7ac589f041b58fefddfd78285592b8555a81296f0c1a6bcb1dae9cfef180c143cfbeec84cff310653d49bad829ba30edadb0 Jan 17 00:41:43.881640 unknown[723]: fetched base config from "system" Jan 17 00:41:43.881676 unknown[723]: fetched user config from "qemu" Jan 17 00:41:43.882119 ignition[723]: fetch-offline: fetch-offline passed Jan 17 00:41:43.890368 systemd-networkd[792]: lo: Link UP Jan 17 00:41:43.882321 ignition[723]: Ignition finished successfully Jan 17 00:41:43.890375 systemd-networkd[792]: lo: Gained carrier Jan 17 00:41:43.900662 systemd-networkd[792]: Enumeration completed Jan 17 00:41:43.902537 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:41:43.903550 systemd-networkd[792]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:41:43.903558 systemd-networkd[792]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 17 00:41:43.910194 systemd-networkd[792]: eth0: Link UP Jan 17 00:41:43.910201 systemd-networkd[792]: eth0: Gained carrier Jan 17 00:41:43.910216 systemd-networkd[792]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:41:43.947799 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:41:44.017942 systemd[1]: Reached target network.target - Network. Jan 17 00:41:44.020276 systemd-networkd[792]: eth0: DHCPv4 address 10.0.0.120/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 00:41:44.033726 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 17 00:41:44.077375 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 00:41:44.177265 ignition[799]: Ignition 2.19.0 Jan 17 00:41:44.177299 ignition[799]: Stage: kargs Jan 17 00:41:44.177589 ignition[799]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:41:44.177607 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:41:44.192852 ignition[799]: kargs: kargs passed Jan 17 00:41:44.192927 ignition[799]: Ignition finished successfully Jan 17 00:41:44.241950 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 00:41:44.294097 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 00:41:44.333001 systemd-resolved[262]: Detected conflict on linux IN A 10.0.0.120 Jan 17 00:41:44.333535 systemd-resolved[262]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. Jan 17 00:41:44.352695 ignition[807]: Ignition 2.19.0 Jan 17 00:41:44.352725 ignition[807]: Stage: disks Jan 17 00:41:44.352991 ignition[807]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:41:44.353008 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:41:44.358735 ignition[807]: disks: disks passed Jan 17 00:41:44.377007 ignition[807]: Ignition finished successfully Jan 17 00:41:44.391735 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 00:41:44.398554 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 00:41:44.414712 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:41:44.430978 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:41:44.440362 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:41:44.452048 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:41:44.501686 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 00:41:44.635629 systemd-fsck[817]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 00:41:44.655858 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 00:41:44.699173 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 00:41:45.087534 kernel: EXT4-fs (vda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none. Jan 17 00:41:45.089824 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 00:41:45.102780 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 00:41:45.107032 systemd-networkd[792]: eth0: Gained IPv6LL Jan 17 00:41:45.191956 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:41:45.214902 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jan 17 00:41:45.244202 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 00:41:45.244288 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 00:41:45.244326 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:41:45.276537 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 00:41:45.297409 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (825) Jan 17 00:41:45.306438 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 00:41:45.328214 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:41:45.328261 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:41:45.328278 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:41:45.338195 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:41:45.345120 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:41:45.514920 initrd-setup-root[849]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:41:45.562492 initrd-setup-root[856]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:41:45.584822 initrd-setup-root[863]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:41:45.629506 initrd-setup-root[870]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:41:46.073899 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 00:41:46.110298 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:41:46.136807 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:41:46.161375 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:41:46.212371 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:41:46.390202 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:41:46.431489 ignition[937]: INFO : Ignition 2.19.0 Jan 17 00:41:46.431489 ignition[937]: INFO : Stage: mount Jan 17 00:41:46.431489 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:41:46.431489 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:41:46.431489 ignition[937]: INFO : mount: mount passed Jan 17 00:41:46.431489 ignition[937]: INFO : Ignition finished successfully Jan 17 00:41:46.441997 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:41:46.459339 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:41:46.540479 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:41:46.583537 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (951) Jan 17 00:41:46.593266 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:41:46.593342 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:41:46.608240 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:41:46.639420 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:41:46.659011 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 00:41:46.831576 ignition[969]: INFO : Ignition 2.19.0 Jan 17 00:41:46.831576 ignition[969]: INFO : Stage: files Jan 17 00:41:46.831576 ignition[969]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:41:46.831576 ignition[969]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:41:46.866451 ignition[969]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:41:46.866451 ignition[969]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:41:46.866451 ignition[969]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:41:46.866451 ignition[969]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:41:46.866451 ignition[969]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:41:46.917318 ignition[969]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:41:46.917318 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:41:46.917318 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:41:46.917318 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:41:46.917318 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:41:46.917318 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:41:46.917318 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:41:46.917318 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:41:46.917318 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 17 00:41:46.866492 unknown[969]: wrote ssh authorized keys file for user: core Jan 17 00:41:47.311298 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 17 00:41:50.879311 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:41:50.879311 ignition[969]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jan 17 00:41:50.900004 ignition[969]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 00:41:50.900004 ignition[969]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 00:41:50.900004 ignition[969]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jan 17 00:41:50.900004 ignition[969]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 00:41:51.197633 ignition[969]: INFO : files: op(9): 
op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 00:41:51.223434 ignition[969]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 00:41:51.254656 ignition[969]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 00:41:51.265025 ignition[969]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:41:51.265025 ignition[969]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:41:51.265025 ignition[969]: INFO : files: files passed Jan 17 00:41:51.265025 ignition[969]: INFO : Ignition finished successfully Jan 17 00:41:51.283585 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:41:51.315650 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:41:51.354525 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:41:51.370120 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:41:51.379234 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:41:51.457804 initrd-setup-root-after-ignition[997]: grep: /sysroot/oem/oem-release: No such file or directory Jan 17 00:41:51.470532 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:41:51.470532 initrd-setup-root-after-ignition[999]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:41:51.513332 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:41:51.557655 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:41:51.571625 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:41:51.607450 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:41:51.731400 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:41:51.731791 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:41:51.742649 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:41:51.752609 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:41:51.762748 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:41:51.781460 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:41:51.947570 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:41:51.998559 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:41:52.081115 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:41:52.088588 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:41:52.092557 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:41:52.126242 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:41:52.126483 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:41:52.177428 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Jan 17 00:41:52.189854 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:41:52.216189 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:41:52.263634 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:41:52.268462 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:41:52.269022 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:41:52.302393 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:41:52.311565 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:41:52.333398 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:41:52.356633 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:41:52.363844 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:41:52.364317 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:41:52.395861 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:41:52.487121 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:41:52.502667 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:41:52.503249 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:41:52.518448 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:41:52.520627 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:41:52.560966 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:41:52.561294 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:41:52.598907 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:41:52.601977 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:41:52.603027 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:41:52.654411 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:41:52.689565 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:41:52.701374 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:41:52.701526 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:41:52.703160 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:41:52.703292 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:41:52.703492 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:41:52.703668 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:41:52.703846 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:41:52.704001 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:41:52.766633 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:41:52.782315 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jan 17 00:41:52.841846 ignition[1023]: INFO : Ignition 2.19.0 Jan 17 00:41:52.841846 ignition[1023]: INFO : Stage: umount Jan 17 00:41:52.841846 ignition[1023]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:41:52.841846 ignition[1023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:41:52.841846 ignition[1023]: INFO : umount: umount passed Jan 17 00:41:52.841846 ignition[1023]: INFO : Ignition finished successfully Jan 17 00:41:52.848658 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:41:52.848976 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:41:52.875481 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:41:52.875688 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:41:52.999716 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:41:53.001116 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:41:53.001329 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:41:53.037564 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:41:53.037874 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:41:53.047939 systemd[1]: Stopped target network.target - Network. Jan 17 00:41:53.081781 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:41:53.082632 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:41:53.093539 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:41:53.093938 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:41:53.102376 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:41:53.102481 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:41:53.146288 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:41:53.146450 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:41:53.158937 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:41:53.159052 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:41:53.166556 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:41:53.178429 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:41:53.228837 systemd-networkd[792]: eth0: DHCPv6 lease lost Jan 17 00:41:53.229961 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:41:53.230268 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:41:53.242706 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:41:53.243884 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:41:53.257493 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:41:53.257695 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:41:53.275456 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:41:53.275569 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:41:53.312475 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:41:53.334255 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:41:53.334390 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 17 00:41:53.367524 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:41:53.367650 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:41:53.378413 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:41:53.378540 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:41:53.387422 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:41:53.387652 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:41:53.449419 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:41:53.493176 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:41:53.500057 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:41:53.520289 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:41:53.524753 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:41:53.548612 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:41:53.548732 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:41:53.555183 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:41:53.555268 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:41:53.570631 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:41:53.570731 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:41:53.576551 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:41:53.576644 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:41:53.592673 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:41:53.592764 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:41:53.643578 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:41:53.665793 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:41:53.665892 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:41:53.673207 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 00:41:53.673312 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:41:53.723350 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:41:53.723462 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:41:53.804762 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:41:53.805381 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:41:53.835846 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:41:53.838294 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:41:53.848497 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:41:53.874544 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:41:53.902493 systemd[1]: Switching root. Jan 17 00:41:53.965059 systemd-journald[194]: Journal stopped Jan 17 00:41:56.676975 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Jan 17 00:41:56.677118 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:41:56.677323 kernel: SELinux: policy capability open_perms=1 Jan 17 00:41:56.677352 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:41:56.677373 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:41:56.677394 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:41:56.677414 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:41:56.677440 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:41:56.677457 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:41:56.677635 systemd[1]: Successfully loaded SELinux policy in 90.946ms. Jan 17 00:41:56.677679 kernel: audit: type=1403 audit(1768610514.358:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:41:56.677699 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 36.973ms. Jan 17 00:41:56.677719 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:41:56.677741 systemd[1]: Detected virtualization kvm. Jan 17 00:41:56.677782 systemd[1]: Detected architecture x86-64. Jan 17 00:41:56.677801 systemd[1]: Detected first boot. Jan 17 00:41:56.677851 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:41:56.677869 zram_generator::config[1067]: No configuration found. Jan 17 00:41:56.677889 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:41:56.677907 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 00:41:56.677925 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 00:41:56.677943 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 00:41:56.677964 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:41:56.677985 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:41:56.678007 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:41:56.678026 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:41:56.678044 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:41:56.678063 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:41:56.678116 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:41:56.678175 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:41:56.678195 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:41:56.678214 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:41:56.678233 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:41:56.678261 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:41:56.678284 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 17 00:41:56.678307 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:41:56.678326 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:41:56.678344 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:41:56.678362 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 00:41:56.678381 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 00:41:56.678399 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 00:41:56.678427 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:41:56.678445 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:41:56.678463 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:41:56.678488 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:41:56.678509 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:41:56.678528 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:41:56.678546 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:41:56.678564 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:41:56.678587 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:41:56.678605 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:41:56.678623 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:41:56.678641 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:41:56.678659 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:41:56.678676 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:41:56.678695 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:41:56.678712 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:41:56.678730 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:41:56.678751 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:41:56.678770 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:41:56.678788 systemd[1]: Reached target machines.target - Containers. Jan 17 00:41:56.678806 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:41:56.678824 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:41:56.678842 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:41:56.678859 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:41:56.678879 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:41:56.678898 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:41:56.678919 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:41:56.678937 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 17 00:41:56.678954 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:41:56.678973 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:41:56.678990 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 00:41:56.679008 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 00:41:56.679026 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 00:41:56.679045 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 00:41:56.679067 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:41:56.679117 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:41:56.679172 kernel: fuse: init (API version 7.39) Jan 17 00:41:56.679191 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:41:56.679210 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:41:56.679227 kernel: loop: module loaded Jan 17 00:41:56.679246 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:41:56.679267 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 00:41:56.679288 systemd[1]: Stopped verity-setup.service. Jan 17 00:41:56.679316 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:41:56.679370 systemd-journald[1151]: Collecting audit messages is disabled. Jan 17 00:41:56.679422 kernel: ACPI: bus type drm_connector registered Jan 17 00:41:56.679441 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:41:56.679459 systemd-journald[1151]: Journal started Jan 17 00:41:56.679493 systemd-journald[1151]: Runtime Journal (/run/log/journal/b6f3fb943bae44e7bbf9414f1028d2fc) is 6.0M, max 48.3M, 42.2M free. Jan 17 00:41:55.649983 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:41:55.681519 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 00:41:55.683668 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 00:41:55.684279 systemd[1]: systemd-journald.service: Consumed 1.684s CPU time. Jan 17 00:41:56.691186 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:41:56.696328 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:41:56.705861 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:41:56.716896 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:41:56.729683 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:41:56.741709 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:41:56.751426 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:41:56.756927 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:41:56.763963 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:41:56.764377 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:41:56.770602 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:41:56.771511 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 17 00:41:56.785371 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:41:56.787211 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:41:56.794620 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:41:56.797403 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:41:56.801839 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:41:56.802217 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:41:56.809866 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:41:56.810254 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:41:56.814034 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:41:56.818500 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:41:56.825491 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:41:56.853991 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:41:56.870406 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:41:56.880177 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:41:56.886677 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:41:56.886763 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:41:56.897548 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:41:56.920886 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:41:56.955729 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:41:56.962933 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:41:56.968953 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:41:56.981684 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:41:56.986290 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:41:56.987999 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:41:56.993896 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:41:56.997943 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:41:57.007813 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:41:57.024690 systemd-journald[1151]: Time spent on flushing to /var/log/journal/b6f3fb943bae44e7bbf9414f1028d2fc is 47.786ms for 972 entries. Jan 17 00:41:57.024690 systemd-journald[1151]: System Journal (/var/log/journal/b6f3fb943bae44e7bbf9414f1028d2fc) is 8.0M, max 195.6M, 187.6M free. Jan 17 00:41:57.128983 systemd-journald[1151]: Received client request to flush runtime journal. 
Jan 17 00:41:57.129239 kernel: loop0: detected capacity change from 0 to 229808 Jan 17 00:41:57.020484 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:41:57.034822 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:41:57.042425 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:41:57.054213 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:41:57.061861 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:41:57.072248 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:41:57.096400 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:41:57.117019 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:41:57.133356 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Jan 17 00:41:57.133386 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Jan 17 00:41:57.144582 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:41:57.150608 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:41:57.156990 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:41:57.160206 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:41:57.164765 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:41:57.182201 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:41:57.189675 udevadm[1193]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 00:41:57.203262 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:41:57.203986 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:41:57.215197 kernel: loop1: detected capacity change from 0 to 142488 Jan 17 00:41:57.267284 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:41:57.282431 kernel: loop2: detected capacity change from 0 to 140768 Jan 17 00:41:57.282314 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:41:57.314561 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Jan 17 00:41:57.315064 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Jan 17 00:41:57.325771 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:41:57.353188 kernel: loop3: detected capacity change from 0 to 229808 Jan 17 00:41:57.383206 kernel: loop4: detected capacity change from 0 to 142488 Jan 17 00:41:57.415187 kernel: loop5: detected capacity change from 0 to 140768 Jan 17 00:41:57.437588 (sd-merge)[1208]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 00:41:57.438599 (sd-merge)[1208]: Merged extensions into '/usr'. Jan 17 00:41:57.448405 systemd[1]: Reloading requested from client PID 1181 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:41:57.448442 systemd[1]: Reloading... Jan 17 00:41:57.521173 zram_generator::config[1233]: No configuration found. 
Jan 17 00:41:57.702467 ldconfig[1176]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:41:57.716370 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:41:57.794768 systemd[1]: Reloading finished in 345 ms. Jan 17 00:41:57.859844 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:41:57.868790 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:41:57.877208 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:41:57.907850 systemd[1]: Starting ensure-sysext.service... Jan 17 00:41:57.913063 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:41:57.931389 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:41:57.947326 systemd[1]: Reloading requested from client PID 1272 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:41:57.947506 systemd[1]: Reloading... Jan 17 00:41:57.976212 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:41:57.976739 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:41:57.978880 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:41:57.979511 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Jan 17 00:41:57.979721 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Jan 17 00:41:57.988694 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:41:57.988714 systemd-tmpfiles[1273]: Skipping /boot Jan 17 00:41:58.010849 systemd-udevd[1274]: Using default interface naming scheme 'v255'. Jan 17 00:41:58.011583 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:41:58.011593 systemd-tmpfiles[1273]: Skipping /boot Jan 17 00:41:58.078485 zram_generator::config[1302]: No configuration found. Jan 17 00:41:58.311978 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1331) Jan 17 00:41:58.459291 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 00:41:58.485046 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:41:58.551359 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 00:41:58.551470 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 17 00:41:58.551967 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 00:41:58.552365 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 00:41:58.554620 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 00:41:58.548006 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:41:58.908988 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:41:58.916229 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jan 17 00:41:58.916326 systemd[1]: Reloading finished in 968 ms. Jan 17 00:41:59.252409 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:41:59.384404 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:41:59.784762 systemd[1]: Finished ensure-sysext.service. Jan 17 00:41:59.818356 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:41:59.896811 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:41:59.998216 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:42:00.007874 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:42:00.013462 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:42:00.072389 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:42:00.102883 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:42:00.161444 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:42:00.171244 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:42:00.176427 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:42:00.212407 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:42:00.261803 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:42:00.276244 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:42:00.284475 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:42:00.296378 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 00:42:00.315627 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:42:00.355862 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:42:00.359571 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:42:00.361050 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:42:00.367197 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:42:00.458207 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:42:00.458451 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:42:00.462520 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:42:00.462730 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:42:00.471463 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:42:00.471928 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:42:00.478285 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:42:00.500518 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 17 00:42:00.500748 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:42:00.546625 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:42:00.553911 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:42:00.557182 augenrules[1407]: No rules Jan 17 00:42:00.562548 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:42:00.578175 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:42:00.590945 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:42:00.601885 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:42:00.602444 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:42:00.650528 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:42:00.761941 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:42:00.794715 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:42:00.863217 kernel: kvm_amd: TSC scaling supported Jan 17 00:42:00.863337 kernel: kvm_amd: Nested Virtualization enabled Jan 17 00:42:00.866279 kernel: kvm_amd: Nested Paging enabled Jan 17 00:42:00.870688 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 17 00:42:00.870742 kernel: kvm_amd: PMU virtualization is disabled Jan 17 00:42:01.125650 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 00:42:01.148510 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:42:01.154232 systemd-resolved[1393]: Positive Trust Anchors: Jan 17 00:42:01.154277 systemd-resolved[1393]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:42:01.154324 systemd-resolved[1393]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:42:01.167199 systemd-networkd[1390]: lo: Link UP Jan 17 00:42:01.167212 systemd-networkd[1390]: lo: Gained carrier Jan 17 00:42:01.169929 systemd-resolved[1393]: Defaulting to hostname 'linux'. Jan 17 00:42:01.171414 systemd-networkd[1390]: Enumeration completed Jan 17 00:42:01.171539 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:42:01.178576 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:42:01.178608 systemd-networkd[1390]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 17 00:42:01.181778 systemd-networkd[1390]: eth0: Link UP Jan 17 00:42:01.181970 systemd-networkd[1390]: eth0: Gained carrier Jan 17 00:42:01.183293 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:42:01.184421 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:42:01.189902 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:42:01.194519 systemd[1]: Reached target network.target - Network. Jan 17 00:42:01.197828 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:42:01.261223 systemd-networkd[1390]: eth0: DHCPv4 address 10.0.0.120/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 00:42:01.272368 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection. Jan 17 00:42:00.713046 systemd-resolved[1393]: Clock change detected. Flushing caches. Jan 17 00:42:00.727289 systemd-journald[1151]: Time jumped backwards, rotating. Jan 17 00:42:00.713221 systemd-timesyncd[1394]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 00:42:00.713915 systemd-timesyncd[1394]: Initial clock synchronization to Sat 2026-01-17 00:42:00.712938 UTC. Jan 17 00:42:01.018619 kernel: EDAC MC: Ver: 3.0.0 Jan 17 00:42:01.093088 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:42:01.108403 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:42:01.126065 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:42:01.194903 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:42:01.208331 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:42:01.215076 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:42:01.229993 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:42:01.238520 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:42:01.293011 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:42:01.300541 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:42:01.312449 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:42:01.327456 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:42:01.327617 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:42:01.333967 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:42:01.382344 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:42:01.395602 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:42:01.413974 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:42:01.426638 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:42:01.434947 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:42:01.449118 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:42:01.449251 systemd[1]: Reached target basic.target - Basic System. 
Jan 17 00:42:01.449379 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:42:01.449422 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:42:01.458061 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:42:01.513001 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:42:01.520374 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:42:01.553114 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:42:01.588409 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:42:01.592364 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:42:01.605013 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:42:01.623011 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:42:01.638623 jq[1441]: false Jan 17 00:42:01.684551 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:42:01.706701 extend-filesystems[1442]: Found loop3 Jan 17 00:42:01.706701 extend-filesystems[1442]: Found loop4 Jan 17 00:42:01.706701 extend-filesystems[1442]: Found loop5 Jan 17 00:42:01.706701 extend-filesystems[1442]: Found sr0 Jan 17 00:42:01.706701 extend-filesystems[1442]: Found vda Jan 17 00:42:01.706701 extend-filesystems[1442]: Found vda1 Jan 17 00:42:01.706701 extend-filesystems[1442]: Found vda2 Jan 17 00:42:01.706701 extend-filesystems[1442]: Found vda3 Jan 17 00:42:01.706701 extend-filesystems[1442]: Found usr Jan 17 00:42:01.706701 extend-filesystems[1442]: Found vda4 Jan 17 00:42:01.706701 extend-filesystems[1442]: Found vda6 Jan 17 00:42:01.706701 extend-filesystems[1442]: Found vda7 Jan 17 00:42:01.706701 extend-filesystems[1442]: Found vda9 Jan 17 00:42:01.706701 extend-filesystems[1442]: Checking size of /dev/vda9 Jan 17 00:42:02.044155 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1327) Jan 17 00:42:02.044321 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 00:42:02.044344 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 00:42:01.716878 dbus-daemon[1440]: [system] SELinux support is enabled Jan 17 00:42:01.718586 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:42:02.045572 extend-filesystems[1442]: Resized partition /dev/vda9 Jan 17 00:42:01.737461 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:42:02.058412 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:42:02.058412 extend-filesystems[1460]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 00:42:02.058412 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 00:42:02.058412 extend-filesystems[1460]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 00:42:01.738346 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jan 17 00:42:02.098630 extend-filesystems[1442]: Resized filesystem in /dev/vda9 Jan 17 00:42:02.105297 update_engine[1459]: I20260117 00:42:01.883512 1459 main.cc:92] Flatcar Update Engine starting Jan 17 00:42:02.105297 update_engine[1459]: I20260117 00:42:01.896538 1459 update_check_scheduler.cc:74] Next update check in 9m45s Jan 17 00:42:01.748543 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:42:02.105937 jq[1462]: true Jan 17 00:42:01.817939 systemd-networkd[1390]: eth0: Gained IPv6LL Jan 17 00:42:01.820581 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:42:01.857528 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:42:01.902545 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:42:01.921300 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:42:01.983421 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:42:01.985964 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:42:01.987437 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:42:01.987709 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:42:01.997959 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:42:01.998237 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:42:02.012483 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:42:02.012511 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:42:02.015277 systemd-logind[1452]: New seat seat0. Jan 17 00:42:02.026065 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:42:02.026406 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:42:02.045348 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:42:02.113525 jq[1465]: true Jan 17 00:42:02.114592 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:42:02.187027 dbus-daemon[1440]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 00:42:02.205250 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:42:02.214240 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:42:02.220337 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:42:02.222242 bash[1493]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:42:02.239253 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 00:42:02.248027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:42:02.260106 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:42:02.281658 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:42:02.282201 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 17 00:42:02.286886 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:42:02.287080 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:42:02.296640 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:42:02.321731 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:42:02.363704 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:42:02.395077 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 00:42:02.399728 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 00:42:02.400141 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 00:42:02.405532 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:42:02.411709 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:42:02.482126 containerd[1467]: time="2026-01-17T00:42:02.481894864Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:42:02.513731 sshd_keygen[1463]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:42:02.521515 containerd[1467]: time="2026-01-17T00:42:02.521250154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:42:02.529377 containerd[1467]: time="2026-01-17T00:42:02.528058136Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:42:02.529377 containerd[1467]: time="2026-01-17T00:42:02.528329614Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:42:02.531929 containerd[1467]: time="2026-01-17T00:42:02.531885674Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:42:02.532179 containerd[1467]: time="2026-01-17T00:42:02.532140540Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:42:02.532248 containerd[1467]: time="2026-01-17T00:42:02.532187127Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:42:02.532296 containerd[1467]: time="2026-01-17T00:42:02.532281313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:42:02.532324 containerd[1467]: time="2026-01-17T00:42:02.532298525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:42:02.532589 containerd[1467]: time="2026-01-17T00:42:02.532542951Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:42:02.532589 containerd[1467]: time="2026-01-17T00:42:02.532568990Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:42:02.532589 containerd[1467]: time="2026-01-17T00:42:02.532585591Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:42:02.532747 containerd[1467]: time="2026-01-17T00:42:02.532598195Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:42:02.532747 containerd[1467]: time="2026-01-17T00:42:02.532710204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:42:02.533332 containerd[1467]: time="2026-01-17T00:42:02.533286770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:42:02.533532 containerd[1467]: time="2026-01-17T00:42:02.533477967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:42:02.533532 containerd[1467]: time="2026-01-17T00:42:02.533522410Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:42:02.533730 containerd[1467]: time="2026-01-17T00:42:02.533663804Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:42:02.533878 containerd[1467]: time="2026-01-17T00:42:02.533764122Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:42:02.550905 containerd[1467]: time="2026-01-17T00:42:02.550595057Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:42:02.550905 containerd[1467]: time="2026-01-17T00:42:02.550744897Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:42:02.550905 containerd[1467]: time="2026-01-17T00:42:02.550852878Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:42:02.550905 containerd[1467]: time="2026-01-17T00:42:02.550888785Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:42:02.550905 containerd[1467]: time="2026-01-17T00:42:02.550911518Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:42:02.550905 containerd[1467]: time="2026-01-17T00:42:02.551204926Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:42:02.550905 containerd[1467]: time="2026-01-17T00:42:02.551528039Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:42:02.550905 containerd[1467]: time="2026-01-17T00:42:02.551739604Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 17 00:42:02.550905 containerd[1467]: time="2026-01-17T00:42:02.551766525Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:42:02.552527 containerd[1467]: time="2026-01-17T00:42:02.551884725Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:42:02.552527 containerd[1467]: time="2026-01-17T00:42:02.551910313Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:42:02.552527 containerd[1467]: time="2026-01-17T00:42:02.551939598Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:42:02.552527 containerd[1467]: time="2026-01-17T00:42:02.551956600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:42:02.552527 containerd[1467]: time="2026-01-17T00:42:02.551979833Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:42:02.552527 containerd[1467]: time="2026-01-17T00:42:02.552002184Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:42:02.552527 containerd[1467]: time="2026-01-17T00:42:02.552023715Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:42:02.552527 containerd[1467]: time="2026-01-17T00:42:02.552041558Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:42:02.552527 containerd[1467]: time="2026-01-17T00:42:02.552060995Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:42:02.552527 containerd[1467]: time="2026-01-17T00:42:02.552089448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:42:02.552527 containerd[1467]: time="2026-01-17T00:42:02.552117400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:42:02.552527 containerd[1467]: time="2026-01-17T00:42:02.552141425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:42:02.552527 containerd[1467]: time="2026-01-17T00:42:02.552159799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:42:02.552527 containerd[1467]: time="2026-01-17T00:42:02.552187782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:42:02.553168 containerd[1467]: time="2026-01-17T00:42:02.552209171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:42:02.553168 containerd[1467]: time="2026-01-17T00:42:02.552228137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:42:02.553168 containerd[1467]: time="2026-01-17T00:42:02.552246801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:42:02.553168 containerd[1467]: time="2026-01-17T00:42:02.552270115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 17 00:42:02.553168 containerd[1467]: time="2026-01-17T00:42:02.552291545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:42:02.553168 containerd[1467]: time="2026-01-17T00:42:02.552307946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:42:02.553168 containerd[1467]: time="2026-01-17T00:42:02.552325198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:42:02.553168 containerd[1467]: time="2026-01-17T00:42:02.552342320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:42:02.553168 containerd[1467]: time="2026-01-17T00:42:02.552362337Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:42:02.553168 containerd[1467]: time="2026-01-17T00:42:02.552390280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:42:02.553168 containerd[1467]: time="2026-01-17T00:42:02.552421739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:42:02.553168 containerd[1467]: time="2026-01-17T00:42:02.552437557Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:42:02.553168 containerd[1467]: time="2026-01-17T00:42:02.552494995Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:42:02.553168 containerd[1467]: time="2026-01-17T00:42:02.552534599Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:42:02.553478 containerd[1467]: time="2026-01-17T00:42:02.552553875Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:42:02.553478 containerd[1467]: time="2026-01-17T00:42:02.552571197Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:42:02.553478 containerd[1467]: time="2026-01-17T00:42:02.552585284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:42:02.553478 containerd[1467]: time="2026-01-17T00:42:02.552611903Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:42:02.553478 containerd[1467]: time="2026-01-17T00:42:02.552640687Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:42:02.553478 containerd[1467]: time="2026-01-17T00:42:02.552662728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 00:42:02.556583 containerd[1467]: time="2026-01-17T00:42:02.553526001Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:42:02.556583 containerd[1467]: time="2026-01-17T00:42:02.555037112Z" level=info msg="Connect containerd service" Jan 17 00:42:02.579357 containerd[1467]: time="2026-01-17T00:42:02.563529961Z" level=info msg="using legacy CRI server" Jan 17 00:42:02.579357 containerd[1467]: time="2026-01-17T00:42:02.563690130Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:42:02.579357 containerd[1467]: time="2026-01-17T00:42:02.564329524Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:42:02.583367 containerd[1467]: time="2026-01-17T00:42:02.582174016Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:42:02.583367 
containerd[1467]: time="2026-01-17T00:42:02.582973879Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:42:02.583367 containerd[1467]: time="2026-01-17T00:42:02.583053588Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:42:02.583367 containerd[1467]: time="2026-01-17T00:42:02.583113510Z" level=info msg="Start subscribing containerd event" Jan 17 00:42:02.583367 containerd[1467]: time="2026-01-17T00:42:02.583155488Z" level=info msg="Start recovering state" Jan 17 00:42:02.583367 containerd[1467]: time="2026-01-17T00:42:02.583243553Z" level=info msg="Start event monitor" Jan 17 00:42:02.583367 containerd[1467]: time="2026-01-17T00:42:02.583264843Z" level=info msg="Start snapshots syncer" Jan 17 00:42:02.583367 containerd[1467]: time="2026-01-17T00:42:02.583276915Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:42:02.583367 containerd[1467]: time="2026-01-17T00:42:02.583287224Z" level=info msg="Start streaming server" Jan 17 00:42:02.611908 containerd[1467]: time="2026-01-17T00:42:02.611154076Z" level=info msg="containerd successfully booted in 0.131613s" Jan 17 00:42:02.583555 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:42:02.627422 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:42:02.640991 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:42:02.660169 systemd[1]: Started sshd@0-10.0.0.120:22-10.0.0.1:49660.service - OpenSSH per-connection server daemon (10.0.0.1:49660). Jan 17 00:42:02.686338 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:42:02.688240 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:42:02.718405 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:42:02.787962 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:42:02.820256 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:42:02.837360 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:42:02.846177 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:42:02.909643 sshd[1535]: Accepted publickey for core from 10.0.0.1 port 49660 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:42:02.914746 sshd[1535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:42:02.931893 systemd-logind[1452]: New session 1 of user core. Jan 17 00:42:02.933388 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:42:02.994280 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:42:03.030287 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:42:03.055501 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:42:03.112110 (systemd)[1546]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:42:06.920238 systemd[1546]: Queued start job for default target default.target. Jan 17 00:42:06.949476 systemd[1546]: Created slice app.slice - User Application Slice. Jan 17 00:42:06.949522 systemd[1546]: Reached target paths.target - Paths. Jan 17 00:42:06.949543 systemd[1546]: Reached target timers.target - Timers. Jan 17 00:42:06.997748 systemd[1546]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 17 00:42:07.073772 systemd[1546]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:42:07.075452 systemd[1546]: Reached target sockets.target - Sockets. Jan 17 00:42:07.075474 systemd[1546]: Reached target basic.target - Basic System. Jan 17 00:42:07.075544 systemd[1546]: Reached target default.target - Main User Target. Jan 17 00:42:07.075597 systemd[1546]: Startup finished in 3.944s. Jan 17 00:42:07.077476 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:42:07.097384 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:42:07.323622 systemd[1]: Started sshd@1-10.0.0.120:22-10.0.0.1:56282.service - OpenSSH per-connection server daemon (10.0.0.1:56282). Jan 17 00:42:07.452580 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 56282 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:42:07.456590 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:42:07.474925 systemd-logind[1452]: New session 2 of user core. Jan 17 00:42:07.487669 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:42:07.613480 sshd[1557]: pam_unix(sshd:session): session closed for user core Jan 17 00:42:07.636245 systemd[1]: sshd@1-10.0.0.120:22-10.0.0.1:56282.service: Deactivated successfully. Jan 17 00:42:07.643772 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:42:07.670135 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:42:07.782489 systemd[1]: Started sshd@2-10.0.0.120:22-10.0.0.1:56292.service - OpenSSH per-connection server daemon (10.0.0.1:56292). Jan 17 00:42:07.798542 systemd-logind[1452]: Removed session 2. Jan 17 00:42:07.902114 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 56292 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:42:07.906401 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:42:07.922570 systemd-logind[1452]: New session 3 of user core. Jan 17 00:42:07.935717 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:42:08.193649 sshd[1564]: pam_unix(sshd:session): session closed for user core Jan 17 00:42:08.202361 systemd[1]: sshd@2-10.0.0.120:22-10.0.0.1:56292.service: Deactivated successfully. Jan 17 00:42:08.205359 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:42:08.212581 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:42:08.221013 systemd-logind[1452]: Removed session 3. Jan 17 00:42:09.425963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:42:09.428164 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:42:09.429596 systemd[1]: Startup finished in 3.650s (kernel) + 17.281s (initrd) + 15.723s (userspace) = 36.655s. 
Jan 17 00:42:09.430635 (kubelet)[1579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:42:13.002620 kubelet[1579]: E0117 00:42:13.001612 1579 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:42:13.010887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:42:13.011245 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:42:13.012667 systemd[1]: kubelet.service: Consumed 7.107s CPU time. Jan 17 00:42:18.244238 systemd[1]: Started sshd@3-10.0.0.120:22-10.0.0.1:32784.service - OpenSSH per-connection server daemon (10.0.0.1:32784). Jan 17 00:42:18.313117 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 32784 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:42:18.319044 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:42:18.332492 systemd-logind[1452]: New session 4 of user core. Jan 17 00:42:18.342199 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:42:18.431547 sshd[1590]: pam_unix(sshd:session): session closed for user core Jan 17 00:42:18.451351 systemd[1]: sshd@3-10.0.0.120:22-10.0.0.1:32784.service: Deactivated successfully. Jan 17 00:42:18.454256 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:42:18.472006 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:42:18.481598 systemd[1]: Started sshd@4-10.0.0.120:22-10.0.0.1:32788.service - OpenSSH per-connection server daemon (10.0.0.1:32788). Jan 17 00:42:18.485435 systemd-logind[1452]: Removed session 4. Jan 17 00:42:18.629073 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 32788 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:42:18.629720 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:42:18.649147 systemd-logind[1452]: New session 5 of user core. Jan 17 00:42:18.673520 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:42:18.752194 sshd[1597]: pam_unix(sshd:session): session closed for user core Jan 17 00:42:18.784937 systemd[1]: sshd@4-10.0.0.120:22-10.0.0.1:32788.service: Deactivated successfully. Jan 17 00:42:18.787059 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:42:18.806008 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:42:18.824119 systemd[1]: Started sshd@5-10.0.0.120:22-10.0.0.1:32802.service - OpenSSH per-connection server daemon (10.0.0.1:32802). Jan 17 00:42:18.827771 systemd-logind[1452]: Removed session 5. Jan 17 00:42:18.896630 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 32802 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:42:18.898172 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:42:18.914657 systemd-logind[1452]: New session 6 of user core. Jan 17 00:42:18.929236 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 17 00:42:19.026556 sshd[1604]: pam_unix(sshd:session): session closed for user core Jan 17 00:42:19.043378 systemd[1]: sshd@5-10.0.0.120:22-10.0.0.1:32802.service: Deactivated successfully. Jan 17 00:42:19.049165 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:42:19.053472 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:42:19.068538 systemd[1]: Started sshd@6-10.0.0.120:22-10.0.0.1:32818.service - OpenSSH per-connection server daemon (10.0.0.1:32818). Jan 17 00:42:19.071736 systemd-logind[1452]: Removed session 6. Jan 17 00:42:19.126596 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 32818 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:42:19.132522 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:42:19.165049 systemd-logind[1452]: New session 7 of user core. Jan 17 00:42:19.182149 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:42:19.495597 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:42:19.497486 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:42:19.572076 sudo[1614]: pam_unix(sudo:session): session closed for user root Jan 17 00:42:19.592903 sshd[1611]: pam_unix(sshd:session): session closed for user core Jan 17 00:42:19.619125 systemd[1]: sshd@6-10.0.0.120:22-10.0.0.1:32818.service: Deactivated successfully. Jan 17 00:42:19.625288 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:42:19.629644 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:42:19.644914 systemd[1]: Started sshd@7-10.0.0.120:22-10.0.0.1:32826.service - OpenSSH per-connection server daemon (10.0.0.1:32826). Jan 17 00:42:19.649695 systemd-logind[1452]: Removed session 7. Jan 17 00:42:19.784281 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 32826 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:42:19.793714 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:42:19.934927 systemd-logind[1452]: New session 8 of user core. Jan 17 00:42:19.951246 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:42:20.137965 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:42:20.138413 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:42:20.152396 sudo[1623]: pam_unix(sudo:session): session closed for user root Jan 17 00:42:20.175991 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:42:20.176528 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:42:20.225334 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:42:20.233966 auditctl[1626]: No rules Jan 17 00:42:20.236592 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:42:20.237068 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:42:20.245726 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:42:20.408571 augenrules[1644]: No rules Jan 17 00:42:20.417650 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 17 00:42:20.421208 sudo[1622]: pam_unix(sudo:session): session closed for user root Jan 17 00:42:20.446387 sshd[1619]: pam_unix(sshd:session): session closed for user core Jan 17 00:42:20.490513 systemd[1]: sshd@7-10.0.0.120:22-10.0.0.1:32826.service: Deactivated successfully. Jan 17 00:42:20.496447 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:42:20.499019 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:42:20.519613 systemd[1]: Started sshd@8-10.0.0.120:22-10.0.0.1:32842.service - OpenSSH per-connection server daemon (10.0.0.1:32842). Jan 17 00:42:20.530048 systemd-logind[1452]: Removed session 8. Jan 17 00:42:20.739720 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 32842 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:42:20.742332 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:42:20.766249 systemd-logind[1452]: New session 9 of user core. Jan 17 00:42:20.775110 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:42:20.900757 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:42:20.902422 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:42:21.023315 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 00:42:21.123217 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 00:42:21.123618 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 00:42:23.602058 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:42:23.622180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:42:24.491302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:42:24.509531 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:42:24.724005 kubelet[1699]: E0117 00:42:24.721415 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:42:24.728629 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:42:24.729028 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:42:24.860519 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:42:24.924219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:42:25.042969 systemd[1]: Reloading requested from client PID 1718 ('systemctl') (unit session-9.scope)... Jan 17 00:42:25.042997 systemd[1]: Reloading... Jan 17 00:42:25.249107 zram_generator::config[1759]: No configuration found. Jan 17 00:42:25.591089 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:42:25.768368 systemd[1]: Reloading finished in 724 ms. Jan 17 00:42:26.029510 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:42:26.029713 systemd[1]: kubelet.service: Failed with result 'signal'. 
Jan 17 00:42:26.034226 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:42:26.058611 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:42:26.640228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:42:26.674743 (kubelet)[1803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:42:26.962299 kubelet[1803]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:42:26.976600 kubelet[1803]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:42:26.976600 kubelet[1803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:42:26.976600 kubelet[1803]: I0117 00:42:26.971384 1803 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:42:27.822348 kubelet[1803]: I0117 00:42:27.820262 1803 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 17 00:42:27.822348 kubelet[1803]: I0117 00:42:27.821615 1803 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:42:27.830490 kubelet[1803]: I0117 00:42:27.823670 1803 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:42:28.043032 kubelet[1803]: I0117 00:42:28.042470 1803 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:42:28.091193 kubelet[1803]: E0117 00:42:28.090805 1803 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:42:28.091193 kubelet[1803]: I0117 00:42:28.091076 1803 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:42:28.123252 kubelet[1803]: I0117 00:42:28.120943 1803 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:42:28.123252 kubelet[1803]: I0117 00:42:28.121618 1803 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:42:28.123252 kubelet[1803]: I0117 00:42:28.121662 1803 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.120","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:42:28.123252 kubelet[1803]: I0117 00:42:28.122253 1803 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:42:28.124112 kubelet[1803]: I0117 00:42:28.122288 1803 container_manager_linux.go:303] "Creating device plugin manager" Jan 17 00:42:28.124112 kubelet[1803]: I0117 00:42:28.122745 1803 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:42:28.135248 kubelet[1803]: I0117 00:42:28.135188 1803 kubelet.go:480] "Attempting to sync node with API server" Jan 17 00:42:28.135689 kubelet[1803]: I0117 00:42:28.135522 1803 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:42:28.143691 kubelet[1803]: I0117 00:42:28.141220 1803 kubelet.go:386] "Adding apiserver pod source" Jan 17 00:42:28.143691 kubelet[1803]: I0117 00:42:28.141312 1803 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:42:28.143691 kubelet[1803]: E0117 00:42:28.142768 1803 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:28.143691 kubelet[1803]: E0117 00:42:28.143072 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:28.165307 kubelet[1803]: I0117 00:42:28.164077 1803 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:42:28.165307 kubelet[1803]: I0117 00:42:28.165088 1803 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:42:28.166206 kubelet[1803]: W0117 
00:42:28.166177 1803 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:42:28.175256 kubelet[1803]: E0117 00:42:28.175120 1803 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:42:28.178399 kubelet[1803]: E0117 00:42:28.178355 1803 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"10.0.0.120\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:42:28.187301 kubelet[1803]: I0117 00:42:28.186557 1803 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:42:28.187301 kubelet[1803]: I0117 00:42:28.186799 1803 server.go:1289] "Started kubelet" Jan 17 00:42:28.187639 kubelet[1803]: I0117 00:42:28.187590 1803 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:42:28.209943 kubelet[1803]: I0117 00:42:28.201949 1803 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:42:28.209943 kubelet[1803]: I0117 00:42:28.202999 1803 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:42:28.209943 kubelet[1803]: I0117 00:42:28.206605 1803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:42:28.209943 kubelet[1803]: I0117 00:42:28.208577 1803 server.go:317] "Adding debug handlers to kubelet server" Jan 17 00:42:28.215013 kubelet[1803]: I0117 00:42:28.214121 1803 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:42:28.233065 kubelet[1803]: E0117 00:42:28.230090 1803 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.120\" not found" Jan 17 00:42:28.233065 kubelet[1803]: I0117 00:42:28.230266 1803 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:42:28.233065 kubelet[1803]: I0117 00:42:28.230780 1803 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:42:28.233065 kubelet[1803]: I0117 00:42:28.231091 1803 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:42:28.234399 kubelet[1803]: I0117 00:42:28.234369 1803 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:42:28.236777 kubelet[1803]: I0117 00:42:28.235242 1803 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:42:28.236777 kubelet[1803]: E0117 00:42:28.235530 1803 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:42:28.319506 kubelet[1803]: I0117 00:42:28.291536 1803 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:42:28.415462 kubelet[1803]: E0117 00:42:28.410610 1803 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.120\" not found" Jan 17 00:42:28.427443 kubelet[1803]: E0117 00:42:28.426594 1803 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.120\" not found" node="10.0.0.120" Jan 17 00:42:28.494757 kubelet[1803]: I0117 00:42:28.494715 1803 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:42:28.495948 kubelet[1803]: I0117 00:42:28.495918 1803 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:42:28.496434 kubelet[1803]: I0117 00:42:28.496412 1803 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:42:28.516073 kubelet[1803]: E0117 00:42:28.515960 1803 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.120\" not found" Jan 17 00:42:28.594103 kubelet[1803]: I0117 00:42:28.593747 1803 policy_none.go:49] "None policy: Start" Jan 17 00:42:28.594103 kubelet[1803]: I0117 00:42:28.593990 1803 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:42:28.594103 kubelet[1803]: I0117 00:42:28.594078 1803 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:42:28.596520 kubelet[1803]: I0117 00:42:28.596465 1803 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 17 00:42:28.617073 kubelet[1803]: E0117 00:42:28.616431 1803 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.120\" not found" Jan 17 00:42:28.635536 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:42:28.681376 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:42:28.690085 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:42:28.707974 kubelet[1803]: E0117 00:42:28.702979 1803 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:42:28.707974 kubelet[1803]: I0117 00:42:28.703396 1803 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:42:28.707974 kubelet[1803]: I0117 00:42:28.703449 1803 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:42:28.707974 kubelet[1803]: I0117 00:42:28.704083 1803 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:42:28.707974 kubelet[1803]: E0117 00:42:28.705522 1803 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:42:28.707974 kubelet[1803]: E0117 00:42:28.705679 1803 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.120\" not found" Jan 17 00:42:28.818264 kubelet[1803]: I0117 00:42:28.816692 1803 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.120" Jan 17 00:42:28.830532 kubelet[1803]: I0117 00:42:28.830299 1803 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 17 00:42:28.831415 kubelet[1803]: E0117 00:42:28.831368 1803 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": read tcp 10.0.0.120:55228->10.0.0.107:6443: use of closed network connection" node="10.0.0.120" Jan 17 00:42:28.831488 kubelet[1803]: I0117 00:42:28.831438 1803 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jan 17 00:42:28.832789 kubelet[1803]: E0117 00:42:28.830612 1803 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.107:6443/api/v1/namespaces/default/events\": read tcp 10.0.0.120:55228->10.0.0.107:6443: use of closed network connection" event="&Event{ObjectMeta:{10.0.0.120.188b5dec6e9937fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.120,UID:10.0.0.120,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:10.0.0.120,},FirstTimestamp:2026-01-17 00:42:28.722251774 +0000 UTC m=+2.021556691,LastTimestamp:2026-01-17 00:42:28.722251774 +0000 UTC m=+2.021556691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.120,}" Jan 17 00:42:28.845990 kubelet[1803]: I0117 00:42:28.845786 1803 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 17 00:42:28.847253 kubelet[1803]: I0117 00:42:28.846414 1803 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 17 00:42:28.847253 kubelet[1803]: I0117 00:42:28.846650 1803 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 17 00:42:28.848209 kubelet[1803]: I0117 00:42:28.847962 1803 kubelet.go:2436] "Starting kubelet main sync loop" Jan 17 00:42:28.849241 kubelet[1803]: E0117 00:42:28.848796 1803 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 17 00:42:29.034996 kubelet[1803]: I0117 00:42:29.034768 1803 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.120" Jan 17 00:42:29.064928 kubelet[1803]: I0117 00:42:29.064342 1803 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.120" Jan 17 00:42:29.148300 kubelet[1803]: I0117 00:42:29.142762 1803 apiserver.go:52] "Watching apiserver" Jan 17 00:42:29.148300 kubelet[1803]: E0117 00:42:29.144701 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:29.211712 systemd[1]: Created slice kubepods-besteffort-pod18b773aa_87dc_40bf_9f96_c1feb7f307a8.slice - libcontainer container kubepods-besteffort-pod18b773aa_87dc_40bf_9f96_c1feb7f307a8.slice. Jan 17 00:42:29.212004 kubelet[1803]: I0117 00:42:29.211902 1803 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 17 00:42:29.212611 containerd[1467]: time="2026-01-17T00:42:29.212499204Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:42:29.215972 kubelet[1803]: I0117 00:42:29.212794 1803 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 17 00:42:29.234339 kubelet[1803]: I0117 00:42:29.234211 1803 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:42:29.245954 systemd[1]: Created slice kubepods-burstable-pod01a1daf8_06fe_4b2b_872a_d498672270e2.slice - libcontainer container kubepods-burstable-pod01a1daf8_06fe_4b2b_872a_d498672270e2.slice. Jan 17 00:42:29.267167 sudo[1655]: pam_unix(sudo:session): session closed for user root Jan 17 00:42:29.273231 sshd[1652]: pam_unix(sshd:session): session closed for user core Jan 17 00:42:29.283540 systemd[1]: sshd@8-10.0.0.120:22-10.0.0.1:32842.service: Deactivated successfully. Jan 17 00:42:29.286644 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:42:29.287063 systemd[1]: session-9.scope: Consumed 3.094s CPU time, 80.2M memory peak, 0B memory swap peak. Jan 17 00:42:29.291509 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:42:29.293505 systemd-logind[1452]: Removed session 9. 
Jan 17 00:42:29.355578 kubelet[1803]: I0117 00:42:29.337497 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/18b773aa-87dc-40bf-9f96-c1feb7f307a8-kube-proxy\") pod \"kube-proxy-jdvz4\" (UID: \"18b773aa-87dc-40bf-9f96-c1feb7f307a8\") " pod="kube-system/kube-proxy-jdvz4" Jan 17 00:42:29.363407 kubelet[1803]: I0117 00:42:29.362466 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-cilium-cgroup\") pod \"cilium-7mq9k\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " pod="kube-system/cilium-7mq9k" Jan 17 00:42:29.363407 kubelet[1803]: I0117 00:42:29.362954 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-lib-modules\") pod \"cilium-7mq9k\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " pod="kube-system/cilium-7mq9k" Jan 17 00:42:29.363407 kubelet[1803]: I0117 00:42:29.363076 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-xtables-lock\") pod \"cilium-7mq9k\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " pod="kube-system/cilium-7mq9k" Jan 17 00:42:29.363407 kubelet[1803]: I0117 00:42:29.363175 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01a1daf8-06fe-4b2b-872a-d498672270e2-clustermesh-secrets\") pod \"cilium-7mq9k\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " pod="kube-system/cilium-7mq9k" Jan 17 00:42:29.363407 kubelet[1803]: I0117 00:42:29.363207 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7lh6\" (UniqueName: \"kubernetes.io/projected/01a1daf8-06fe-4b2b-872a-d498672270e2-kube-api-access-n7lh6\") pod \"cilium-7mq9k\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " pod="kube-system/cilium-7mq9k" Jan 17 00:42:29.363407 kubelet[1803]: I0117 00:42:29.363312 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18b773aa-87dc-40bf-9f96-c1feb7f307a8-lib-modules\") pod \"kube-proxy-jdvz4\" (UID: \"18b773aa-87dc-40bf-9f96-c1feb7f307a8\") " pod="kube-system/kube-proxy-jdvz4" Jan 17 00:42:29.364001 kubelet[1803]: I0117 00:42:29.363422 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-cilium-run\") pod \"cilium-7mq9k\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " pod="kube-system/cilium-7mq9k" Jan 17 00:42:29.364001 kubelet[1803]: I0117 00:42:29.363451 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-cni-path\") pod \"cilium-7mq9k\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " pod="kube-system/cilium-7mq9k" Jan 17 00:42:29.364001 kubelet[1803]: I0117 00:42:29.363548 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-hostproc\") pod \"cilium-7mq9k\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " pod="kube-system/cilium-7mq9k" Jan 17 00:42:29.364001 kubelet[1803]: I0117 00:42:29.363636 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-host-proc-sys-net\") pod \"cilium-7mq9k\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " pod="kube-system/cilium-7mq9k" Jan 17 00:42:29.364001 kubelet[1803]: I0117 00:42:29.363670 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-host-proc-sys-kernel\") pod \"cilium-7mq9k\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " pod="kube-system/cilium-7mq9k" Jan 17 00:42:29.366029 kubelet[1803]: I0117 00:42:29.363779 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01a1daf8-06fe-4b2b-872a-d498672270e2-hubble-tls\") pod \"cilium-7mq9k\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " pod="kube-system/cilium-7mq9k" Jan 17 00:42:29.372023 kubelet[1803]: I0117 00:42:29.368617 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18b773aa-87dc-40bf-9f96-c1feb7f307a8-xtables-lock\") pod \"kube-proxy-jdvz4\" (UID: \"18b773aa-87dc-40bf-9f96-c1feb7f307a8\") " pod="kube-system/kube-proxy-jdvz4" Jan 17 00:42:29.372023 kubelet[1803]: I0117 00:42:29.368659 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj89z\" (UniqueName: \"kubernetes.io/projected/18b773aa-87dc-40bf-9f96-c1feb7f307a8-kube-api-access-lj89z\") pod \"kube-proxy-jdvz4\" (UID: \"18b773aa-87dc-40bf-9f96-c1feb7f307a8\") " pod="kube-system/kube-proxy-jdvz4" Jan 17 00:42:29.372023 kubelet[1803]: I0117 00:42:29.368679 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-bpf-maps\") pod \"cilium-7mq9k\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " pod="kube-system/cilium-7mq9k" Jan 17 00:42:29.372023 kubelet[1803]: I0117 00:42:29.368696 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-etc-cni-netd\") pod \"cilium-7mq9k\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " pod="kube-system/cilium-7mq9k" Jan 17 00:42:29.372023 kubelet[1803]: I0117 00:42:29.368712 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01a1daf8-06fe-4b2b-872a-d498672270e2-cilium-config-path\") pod \"cilium-7mq9k\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " pod="kube-system/cilium-7mq9k" Jan 17 00:42:29.533693 kubelet[1803]: E0117 00:42:29.532058 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:29.535753 containerd[1467]: time="2026-01-17T00:42:29.533297072Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-jdvz4,Uid:18b773aa-87dc-40bf-9f96-c1feb7f307a8,Namespace:kube-system,Attempt:0,}" Jan 17 00:42:29.574137 kubelet[1803]: E0117 00:42:29.573965 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:29.575934 containerd[1467]: time="2026-01-17T00:42:29.574675360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7mq9k,Uid:01a1daf8-06fe-4b2b-872a-d498672270e2,Namespace:kube-system,Attempt:0,}" Jan 17 00:42:30.149571 kubelet[1803]: E0117 00:42:30.149382 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:30.470731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1381559299.mount: Deactivated successfully. Jan 17 00:42:30.485723 containerd[1467]: time="2026-01-17T00:42:30.485544669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:42:30.490995 containerd[1467]: time="2026-01-17T00:42:30.489678113Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:42:30.495772 containerd[1467]: time="2026-01-17T00:42:30.495543946Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 00:42:30.499562 containerd[1467]: time="2026-01-17T00:42:30.499041167Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:42:30.504714 containerd[1467]: time="2026-01-17T00:42:30.501294765Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:42:30.512768 containerd[1467]: time="2026-01-17T00:42:30.512062973Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 978.62724ms" Jan 17 00:42:30.515684 containerd[1467]: time="2026-01-17T00:42:30.513943855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:42:30.515684 containerd[1467]: time="2026-01-17T00:42:30.515031275Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 940.066625ms" Jan 17 00:42:30.995372 containerd[1467]: time="2026-01-17T00:42:30.990637377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:42:30.995372 containerd[1467]: time="2026-01-17T00:42:30.990742553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:42:30.995372 containerd[1467]: time="2026-01-17T00:42:30.990762350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:30.995372 containerd[1467]: time="2026-01-17T00:42:30.990960320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:31.004227 containerd[1467]: time="2026-01-17T00:42:31.004050574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:42:31.004227 containerd[1467]: time="2026-01-17T00:42:31.004166150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:42:31.004227 containerd[1467]: time="2026-01-17T00:42:31.004182420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:31.004790 containerd[1467]: time="2026-01-17T00:42:31.004661605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:31.149979 kubelet[1803]: E0117 00:42:31.149511 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:31.302230 systemd[1]: Started cri-containerd-5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273.scope - libcontainer container 5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273. Jan 17 00:42:31.307477 systemd[1]: Started cri-containerd-7394e560fa90a358da112a4a9b3184bd2e0c0ef184a7e0d771219a794180c7e7.scope - libcontainer container 7394e560fa90a358da112a4a9b3184bd2e0c0ef184a7e0d771219a794180c7e7. 
Jan 17 00:42:31.520358 containerd[1467]: time="2026-01-17T00:42:31.520184752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7mq9k,Uid:01a1daf8-06fe-4b2b-872a-d498672270e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273\"" Jan 17 00:42:31.525164 kubelet[1803]: E0117 00:42:31.524587 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:31.527112 containerd[1467]: time="2026-01-17T00:42:31.527030011Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 00:42:31.552357 containerd[1467]: time="2026-01-17T00:42:31.552202843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jdvz4,Uid:18b773aa-87dc-40bf-9f96-c1feb7f307a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7394e560fa90a358da112a4a9b3184bd2e0c0ef184a7e0d771219a794180c7e7\"" Jan 17 00:42:31.554649 kubelet[1803]: E0117 00:42:31.554513 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:32.152482 kubelet[1803]: E0117 00:42:32.152135 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:33.163245 kubelet[1803]: E0117 00:42:33.162148 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:34.168079 kubelet[1803]: E0117 00:42:34.167250 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:35.169965 kubelet[1803]: E0117 00:42:35.168868 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:36.170356 kubelet[1803]: E0117 00:42:36.170097 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:37.253043 kubelet[1803]: E0117 00:42:37.192190 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:38.335032 kubelet[1803]: E0117 00:42:38.318132 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:39.350370 kubelet[1803]: E0117 00:42:39.344762 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:40.347141 kubelet[1803]: E0117 00:42:40.346414 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:41.348392 kubelet[1803]: E0117 00:42:41.347486 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:42.353544 kubelet[1803]: E0117 00:42:42.351679 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:43.790743 kubelet[1803]: E0117 00:42:43.607622 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:44.806502 kubelet[1803]: E0117 00:42:44.806426 
1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:45.807086 kubelet[1803]: E0117 00:42:45.806599 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:46.789914 update_engine[1459]: I20260117 00:42:46.788946 1459 update_attempter.cc:509] Updating boot flags... Jan 17 00:42:46.808633 kubelet[1803]: E0117 00:42:46.807682 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:46.902269 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1952) Jan 17 00:42:47.017882 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1956) Jan 17 00:42:47.810771 kubelet[1803]: E0117 00:42:47.810681 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:48.142062 kubelet[1803]: E0117 00:42:48.141981 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:48.812558 kubelet[1803]: E0117 00:42:48.812487 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:50.457584 kubelet[1803]: E0117 00:42:50.453793 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:50.625786 kubelet[1803]: E0117 00:42:50.618269 1803 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.768s" Jan 17 00:42:51.539494 kubelet[1803]: E0117 00:42:51.538407 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:51.714571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2970197808.mount: Deactivated successfully. 
Jan 17 00:42:52.548564 kubelet[1803]: E0117 00:42:52.548395 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:53.549679 kubelet[1803]: E0117 00:42:53.549262 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:54.551404 kubelet[1803]: E0117 00:42:54.550977 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:55.552656 kubelet[1803]: E0117 00:42:55.551729 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:56.559855 kubelet[1803]: E0117 00:42:56.558933 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:57.561863 kubelet[1803]: E0117 00:42:57.560509 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:58.654088 kubelet[1803]: E0117 00:42:58.643000 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:42:59.654860 kubelet[1803]: E0117 00:42:59.654763 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:00.659855 kubelet[1803]: E0117 00:43:00.656424 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:01.883570 kubelet[1803]: E0117 00:43:01.882532 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:02.938906 kubelet[1803]: E0117 00:43:02.909376 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:03.916352 kubelet[1803]: E0117 00:43:03.915871 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:04.993261 kubelet[1803]: E0117 00:43:04.986961 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:05.993070 kubelet[1803]: E0117 00:43:05.992456 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:06.651108 containerd[1467]: time="2026-01-17T00:43:06.650731778Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:43:06.655312 containerd[1467]: time="2026-01-17T00:43:06.654890092Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 17 00:43:06.663018 containerd[1467]: time="2026-01-17T00:43:06.662436735Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:43:06.675967 containerd[1467]: time="2026-01-17T00:43:06.669537823Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 35.142433473s" Jan 17 00:43:06.675967 containerd[1467]: time="2026-01-17T00:43:06.669588877Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 17 00:43:06.675967 containerd[1467]: time="2026-01-17T00:43:06.674576395Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 17 00:43:06.722008 containerd[1467]: time="2026-01-17T00:43:06.721298485Z" level=info msg="CreateContainer within sandbox \"5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:43:06.797018 containerd[1467]: time="2026-01-17T00:43:06.796799208Z" level=info msg="CreateContainer within sandbox \"5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180\"" Jan 17 00:43:06.798892 containerd[1467]: time="2026-01-17T00:43:06.798650463Z" level=info msg="StartContainer for \"b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180\"" Jan 17 00:43:06.887331 systemd[1]: Started cri-containerd-b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180.scope - libcontainer container b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180. Jan 17 00:43:06.957924 containerd[1467]: time="2026-01-17T00:43:06.957672401Z" level=info msg="StartContainer for \"b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180\" returns successfully" Jan 17 00:43:06.994532 kubelet[1803]: E0117 00:43:06.993285 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:07.033645 systemd[1]: cri-containerd-b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180.scope: Deactivated successfully. Jan 17 00:43:07.099602 kubelet[1803]: E0117 00:43:07.099113 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:07.398300 containerd[1467]: time="2026-01-17T00:43:07.395969243Z" level=info msg="shim disconnected" id=b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180 namespace=k8s.io Jan 17 00:43:07.398300 containerd[1467]: time="2026-01-17T00:43:07.396056584Z" level=warning msg="cleaning up after shim disconnected" id=b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180 namespace=k8s.io Jan 17 00:43:07.398300 containerd[1467]: time="2026-01-17T00:43:07.396068697Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:43:07.759628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180-rootfs.mount: Deactivated successfully. 
Jan 17 00:43:07.994552 kubelet[1803]: E0117 00:43:07.994368 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:08.107048 kubelet[1803]: E0117 00:43:08.106242 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:08.126917 containerd[1467]: time="2026-01-17T00:43:08.126787804Z" level=info msg="CreateContainer within sandbox \"5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:43:08.143674 kubelet[1803]: E0117 00:43:08.143285 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:08.239225 containerd[1467]: time="2026-01-17T00:43:08.239101778Z" level=info msg="CreateContainer within sandbox \"5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ff07a1babcbdd8e9080b2c5affe510b51e7292a2b0d4232a142db2d7e3fc0d50\"" Jan 17 00:43:08.240307 containerd[1467]: time="2026-01-17T00:43:08.240272734Z" level=info msg="StartContainer for \"ff07a1babcbdd8e9080b2c5affe510b51e7292a2b0d4232a142db2d7e3fc0d50\"" Jan 17 00:43:08.363149 systemd[1]: Started cri-containerd-ff07a1babcbdd8e9080b2c5affe510b51e7292a2b0d4232a142db2d7e3fc0d50.scope - libcontainer container ff07a1babcbdd8e9080b2c5affe510b51e7292a2b0d4232a142db2d7e3fc0d50. Jan 17 00:43:08.468963 containerd[1467]: time="2026-01-17T00:43:08.468751335Z" level=info msg="StartContainer for \"ff07a1babcbdd8e9080b2c5affe510b51e7292a2b0d4232a142db2d7e3fc0d50\" returns successfully" Jan 17 00:43:08.521124 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:43:08.521489 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:43:08.521794 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:43:08.549338 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:43:08.549751 systemd[1]: cri-containerd-ff07a1babcbdd8e9080b2c5affe510b51e7292a2b0d4232a142db2d7e3fc0d50.scope: Deactivated successfully. Jan 17 00:43:08.637022 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 17 00:43:08.717924 containerd[1467]: time="2026-01-17T00:43:08.715934463Z" level=info msg="shim disconnected" id=ff07a1babcbdd8e9080b2c5affe510b51e7292a2b0d4232a142db2d7e3fc0d50 namespace=k8s.io Jan 17 00:43:08.717924 containerd[1467]: time="2026-01-17T00:43:08.715996688Z" level=warning msg="cleaning up after shim disconnected" id=ff07a1babcbdd8e9080b2c5affe510b51e7292a2b0d4232a142db2d7e3fc0d50 namespace=k8s.io Jan 17 00:43:08.717924 containerd[1467]: time="2026-01-17T00:43:08.716011104Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:43:08.764228 containerd[1467]: time="2026-01-17T00:43:08.762622591Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:43:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:43:08.996065 kubelet[1803]: E0117 00:43:08.995357 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:09.129182 kubelet[1803]: E0117 00:43:09.128979 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:09.173139 containerd[1467]: time="2026-01-17T00:43:09.172949201Z" level=info msg="CreateContainer within sandbox \"5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:43:09.349918 containerd[1467]: time="2026-01-17T00:43:09.342168431Z" level=info msg="CreateContainer within sandbox \"5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c\"" Jan 17 00:43:09.349918 containerd[1467]: time="2026-01-17T00:43:09.347318191Z" level=info msg="StartContainer for \"11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c\"" Jan 17 00:43:09.457184 systemd[1]: Started cri-containerd-11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c.scope - libcontainer container 11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c. Jan 17 00:43:09.570166 containerd[1467]: time="2026-01-17T00:43:09.568085366Z" level=info msg="StartContainer for \"11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c\" returns successfully" Jan 17 00:43:09.582549 systemd[1]: cri-containerd-11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c.scope: Deactivated successfully. Jan 17 00:43:09.767260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c-rootfs.mount: Deactivated successfully. Jan 17 00:43:09.767439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3241757145.mount: Deactivated successfully. 
Jan 17 00:43:09.837422 containerd[1467]: time="2026-01-17T00:43:09.837254912Z" level=info msg="shim disconnected" id=11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c namespace=k8s.io Jan 17 00:43:09.837422 containerd[1467]: time="2026-01-17T00:43:09.837323819Z" level=warning msg="cleaning up after shim disconnected" id=11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c namespace=k8s.io Jan 17 00:43:09.837422 containerd[1467]: time="2026-01-17T00:43:09.837335101Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:43:09.995940 kubelet[1803]: E0117 00:43:09.995642 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:10.143603 kubelet[1803]: E0117 00:43:10.143565 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:10.173353 containerd[1467]: time="2026-01-17T00:43:10.167658900Z" level=info msg="CreateContainer within sandbox \"5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:43:10.236148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount152152327.mount: Deactivated successfully. Jan 17 00:43:10.249891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1745220248.mount: Deactivated successfully. Jan 17 00:43:10.272561 containerd[1467]: time="2026-01-17T00:43:10.272386457Z" level=info msg="CreateContainer within sandbox \"5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4369ff3a38b0882f90e570756c78fbd3affd49ce8a40f948395c07d741f16cc4\"" Jan 17 00:43:10.274238 containerd[1467]: time="2026-01-17T00:43:10.274107042Z" level=info msg="StartContainer for \"4369ff3a38b0882f90e570756c78fbd3affd49ce8a40f948395c07d741f16cc4\"" Jan 17 00:43:10.343523 systemd[1]: Started cri-containerd-4369ff3a38b0882f90e570756c78fbd3affd49ce8a40f948395c07d741f16cc4.scope - libcontainer container 4369ff3a38b0882f90e570756c78fbd3affd49ce8a40f948395c07d741f16cc4. Jan 17 00:43:10.452752 systemd[1]: cri-containerd-4369ff3a38b0882f90e570756c78fbd3affd49ce8a40f948395c07d741f16cc4.scope: Deactivated successfully. 
Jan 17 00:43:10.473575 containerd[1467]: time="2026-01-17T00:43:10.473231546Z" level=info msg="StartContainer for \"4369ff3a38b0882f90e570756c78fbd3affd49ce8a40f948395c07d741f16cc4\" returns successfully" Jan 17 00:43:10.705144 containerd[1467]: time="2026-01-17T00:43:10.702286252Z" level=info msg="shim disconnected" id=4369ff3a38b0882f90e570756c78fbd3affd49ce8a40f948395c07d741f16cc4 namespace=k8s.io Jan 17 00:43:10.705144 containerd[1467]: time="2026-01-17T00:43:10.702385736Z" level=warning msg="cleaning up after shim disconnected" id=4369ff3a38b0882f90e570756c78fbd3affd49ce8a40f948395c07d741f16cc4 namespace=k8s.io Jan 17 00:43:10.705144 containerd[1467]: time="2026-01-17T00:43:10.702402017Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:43:11.011052 kubelet[1803]: E0117 00:43:11.004959 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:11.159217 kubelet[1803]: E0117 00:43:11.158104 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:11.185632 containerd[1467]: time="2026-01-17T00:43:11.185310801Z" level=info msg="CreateContainer within sandbox \"5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 00:43:11.244320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2377661768.mount: Deactivated successfully. Jan 17 00:43:11.279959 containerd[1467]: time="2026-01-17T00:43:11.279129776Z" level=info msg="CreateContainer within sandbox \"5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b\"" Jan 17 00:43:11.285405 containerd[1467]: time="2026-01-17T00:43:11.281280355Z" level=info msg="StartContainer for \"01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b\"" Jan 17 00:43:11.442892 systemd[1]: Started cri-containerd-01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b.scope - libcontainer container 01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b. 
Jan 17 00:43:11.589928 containerd[1467]: time="2026-01-17T00:43:11.589723969Z" level=info msg="StartContainer for \"01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b\" returns successfully" Jan 17 00:43:11.982682 kubelet[1803]: I0117 00:43:11.980762 1803 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:43:12.009587 kubelet[1803]: E0117 00:43:12.009410 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:12.187900 kubelet[1803]: E0117 00:43:12.187455 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:12.292893 kubelet[1803]: I0117 00:43:12.288355 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7mq9k" podStartSLOduration=8.142796186 podStartE2EDuration="43.288275938s" podCreationTimestamp="2026-01-17 00:42:29 +0000 UTC" firstStartedPulling="2026-01-17 00:42:31.526305999 +0000 UTC m=+4.825611016" lastFinishedPulling="2026-01-17 00:43:06.671785883 +0000 UTC m=+39.971090768" observedRunningTime="2026-01-17 00:43:12.278136563 +0000 UTC m=+45.577441449" watchObservedRunningTime="2026-01-17 00:43:12.288275938 +0000 UTC m=+45.587580824" Jan 17 00:43:12.298348 containerd[1467]: time="2026-01-17T00:43:12.294700129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:43:12.298348 containerd[1467]: time="2026-01-17T00:43:12.297299002Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 17 00:43:12.300166 containerd[1467]: time="2026-01-17T00:43:12.300106883Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:43:12.315910 containerd[1467]: time="2026-01-17T00:43:12.313085456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:43:12.318398 containerd[1467]: time="2026-01-17T00:43:12.316968172Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 5.642343527s" Jan 17 00:43:12.320506 containerd[1467]: time="2026-01-17T00:43:12.318722067Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 17 00:43:12.341757 containerd[1467]: time="2026-01-17T00:43:12.341612824Z" level=info msg="CreateContainer within sandbox \"7394e560fa90a358da112a4a9b3184bd2e0c0ef184a7e0d771219a794180c7e7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:43:12.392071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3352822100.mount: Deactivated successfully. 
Jan 17 00:43:12.403534 containerd[1467]: time="2026-01-17T00:43:12.403354004Z" level=info msg="CreateContainer within sandbox \"7394e560fa90a358da112a4a9b3184bd2e0c0ef184a7e0d771219a794180c7e7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e56b18f1dae1bf54710caf9c502bfb7c5f94b0a1e9ed3b6b282562b7420a72d4\"" Jan 17 00:43:12.404757 containerd[1467]: time="2026-01-17T00:43:12.404669414Z" level=info msg="StartContainer for \"e56b18f1dae1bf54710caf9c502bfb7c5f94b0a1e9ed3b6b282562b7420a72d4\"" Jan 17 00:43:12.498600 systemd[1]: Started cri-containerd-e56b18f1dae1bf54710caf9c502bfb7c5f94b0a1e9ed3b6b282562b7420a72d4.scope - libcontainer container e56b18f1dae1bf54710caf9c502bfb7c5f94b0a1e9ed3b6b282562b7420a72d4. Jan 17 00:43:12.636481 containerd[1467]: time="2026-01-17T00:43:12.636296477Z" level=info msg="StartContainer for \"e56b18f1dae1bf54710caf9c502bfb7c5f94b0a1e9ed3b6b282562b7420a72d4\" returns successfully" Jan 17 00:43:13.022507 kubelet[1803]: E0117 00:43:13.022142 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:13.207225 kubelet[1803]: E0117 00:43:13.205320 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:13.207225 kubelet[1803]: E0117 00:43:13.207147 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:14.025326 kubelet[1803]: E0117 00:43:14.024848 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:14.208536 kubelet[1803]: E0117 00:43:14.208450 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:14.211008 kubelet[1803]: E0117 00:43:14.209453 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:15.025948 kubelet[1803]: E0117 00:43:15.025680 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:15.214875 kubelet[1803]: E0117 00:43:15.213701 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:16.027879 kubelet[1803]: E0117 00:43:16.027341 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:16.165063 kubelet[1803]: I0117 00:43:16.164014 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jdvz4" podStartSLOduration=6.399922544 podStartE2EDuration="47.16398794s" podCreationTimestamp="2026-01-17 00:42:29 +0000 UTC" firstStartedPulling="2026-01-17 00:42:31.560523888 +0000 UTC m=+4.859828784" lastFinishedPulling="2026-01-17 00:43:12.324589294 +0000 UTC m=+45.623894180" observedRunningTime="2026-01-17 00:43:13.261234791 +0000 UTC m=+46.560539677" watchObservedRunningTime="2026-01-17 00:43:16.16398794 +0000 UTC m=+49.463292826" Jan 17 00:43:16.206089 systemd[1]: Created slice 
kubepods-besteffort-pod5afdf435_8b02_4398_9819_bf195cfcb270.slice - libcontainer container kubepods-besteffort-pod5afdf435_8b02_4398_9819_bf195cfcb270.slice. Jan 17 00:43:16.233566 kubelet[1803]: I0117 00:43:16.233447 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx9tb\" (UniqueName: \"kubernetes.io/projected/5afdf435-8b02-4398-9819-bf195cfcb270-kube-api-access-zx9tb\") pod \"nginx-deployment-7fcdb87857-zb84p\" (UID: \"5afdf435-8b02-4398-9819-bf195cfcb270\") " pod="default/nginx-deployment-7fcdb87857-zb84p" Jan 17 00:43:16.520857 containerd[1467]: time="2026-01-17T00:43:16.518207220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-zb84p,Uid:5afdf435-8b02-4398-9819-bf195cfcb270,Namespace:default,Attempt:0,}" Jan 17 00:43:17.031660 kubelet[1803]: E0117 00:43:17.031533 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:18.032324 kubelet[1803]: E0117 00:43:18.032206 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:19.036090 kubelet[1803]: E0117 00:43:19.033984 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:20.036584 kubelet[1803]: E0117 00:43:20.035037 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:21.037179 kubelet[1803]: E0117 00:43:21.037006 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:22.039214 kubelet[1803]: E0117 00:43:22.039070 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:23.040160 kubelet[1803]: E0117 00:43:23.039942 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:24.053576 kubelet[1803]: E0117 00:43:24.049651 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:25.071295 kubelet[1803]: E0117 00:43:25.068108 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:26.119930 kubelet[1803]: E0117 00:43:26.106071 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:27.150293 kubelet[1803]: E0117 00:43:27.132174 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:28.209155 kubelet[1803]: E0117 00:43:28.196603 1803 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.308s" Jan 17 00:43:28.436640 kubelet[1803]: E0117 00:43:28.211476 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:28.436640 kubelet[1803]: E0117 00:43:28.216014 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:29.220496 kubelet[1803]: E0117 00:43:29.218984 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 
00:43:30.246261 kubelet[1803]: E0117 00:43:30.238337 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:31.392628 kubelet[1803]: E0117 00:43:31.322254 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:32.811559 kubelet[1803]: E0117 00:43:32.584561 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:33.638344 kubelet[1803]: E0117 00:43:33.637597 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:34.713524 kubelet[1803]: E0117 00:43:34.694909 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:35.753461 kubelet[1803]: E0117 00:43:35.742139 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:36.836928 kubelet[1803]: E0117 00:43:36.805370 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:37.918385 kubelet[1803]: E0117 00:43:37.908082 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:38.915242 kubelet[1803]: E0117 00:43:38.914556 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:39.946986 kubelet[1803]: E0117 00:43:39.942478 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:41.001531 kubelet[1803]: E0117 00:43:40.952874 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:42.002204 kubelet[1803]: E0117 00:43:41.991087 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:43.154802 kubelet[1803]: E0117 00:43:43.153378 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:45.414781 kubelet[1803]: E0117 00:43:45.414126 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:45.902772 kubelet[1803]: E0117 00:43:45.741994 1803 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.589s" Jan 17 00:43:47.119377 kubelet[1803]: E0117 00:43:47.102362 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:48.125754 kubelet[1803]: E0117 00:43:48.124735 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:48.147255 kubelet[1803]: E0117 00:43:48.147133 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:48.912393 kernel: Initializing XFRM netlink socket Jan 17 00:43:49.125386 kubelet[1803]: E0117 00:43:49.125298 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:50.133119 kubelet[1803]: E0117 
00:43:50.130644 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:51.131542 kubelet[1803]: E0117 00:43:51.131365 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:52.133422 kubelet[1803]: E0117 00:43:52.133089 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:53.134747 kubelet[1803]: E0117 00:43:53.133585 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:54.140850 kubelet[1803]: E0117 00:43:54.139788 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:55.144936 kubelet[1803]: E0117 00:43:55.144762 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:56.148251 kubelet[1803]: E0117 00:43:56.146638 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:57.154355 kubelet[1803]: E0117 00:43:57.153935 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:58.155603 kubelet[1803]: E0117 00:43:58.155289 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:43:59.157078 kubelet[1803]: E0117 00:43:59.156384 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:00.165363 kubelet[1803]: E0117 00:44:00.157120 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:01.166475 kubelet[1803]: E0117 00:44:01.165431 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:02.167283 kubelet[1803]: E0117 00:44:02.165993 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:03.169185 kubelet[1803]: E0117 00:44:03.168405 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:04.170454 kubelet[1803]: E0117 00:44:04.169415 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:05.173556 kubelet[1803]: E0117 00:44:05.170544 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:06.173794 kubelet[1803]: E0117 00:44:06.173660 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:07.175926 kubelet[1803]: E0117 00:44:07.174066 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:08.142043 kubelet[1803]: E0117 00:44:08.141944 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:08.176504 kubelet[1803]: E0117 00:44:08.175051 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 17 00:44:09.177390 kubelet[1803]: E0117 00:44:09.177001 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:10.181778 kubelet[1803]: E0117 00:44:10.179591 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:11.182984 kubelet[1803]: E0117 00:44:11.182520 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:12.183664 kubelet[1803]: E0117 00:44:12.183495 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:13.183757 kubelet[1803]: E0117 00:44:13.183684 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:14.185369 kubelet[1803]: E0117 00:44:14.185264 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:15.186692 kubelet[1803]: E0117 00:44:15.186528 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:16.187633 kubelet[1803]: E0117 00:44:16.187555 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:16.196487 systemd-networkd[1390]: cilium_host: Link UP Jan 17 00:44:16.196947 systemd-networkd[1390]: cilium_net: Link UP Jan 17 00:44:16.196955 systemd-networkd[1390]: cilium_net: Gained carrier Jan 17 00:44:16.197385 systemd-networkd[1390]: cilium_host: Gained carrier Jan 17 00:44:16.197790 systemd-networkd[1390]: cilium_host: Gained IPv6LL Jan 17 00:44:16.509546 systemd-networkd[1390]: cilium_vxlan: Link UP Jan 17 00:44:16.509554 systemd-networkd[1390]: cilium_vxlan: Gained carrier Jan 17 00:44:16.868017 systemd-networkd[1390]: cilium_net: Gained IPv6LL Jan 17 00:44:17.322924 kubelet[1803]: E0117 00:44:17.250480 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:17.645011 kernel: NET: Registered PF_ALG protocol family Jan 17 00:44:17.844301 containerd[1467]: time="2026-01-17T00:44:17.840424563Z" level=error msg="Failed to destroy network for sandbox \"be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b\"" error="plugin type=\"cilium-cni\" name=\"cilium\" failed (delete): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Jan 17 00:44:17.881964 containerd[1467]: time="2026-01-17T00:44:17.875668473Z" level=error msg="encountered an error cleaning up failed sandbox \"be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"cilium-cni\" name=\"cilium\" failed (delete): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" 
Jan 17 00:44:17.881964 containerd[1467]: time="2026-01-17T00:44:17.875950191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-zb84p,Uid:5afdf435-8b02-4398-9819-bf195cfcb270,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Jan 17 00:44:17.882621 kubelet[1803]: E0117 00:44:17.882183 1803 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 17 00:44:17.882621 kubelet[1803]: rpc error: code = Unknown desc = failed to setup network for sandbox "be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Jan 17 00:44:17.882621 kubelet[1803]: Is the agent running? Jan 17 00:44:17.882621 kubelet[1803]: > Jan 17 00:44:17.882621 kubelet[1803]: E0117 00:44:17.882539 1803 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=< Jan 17 00:44:17.882621 kubelet[1803]: rpc error: code = Unknown desc = failed to setup network for sandbox "be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Jan 17 00:44:17.882621 kubelet[1803]: Is the agent running? Jan 17 00:44:17.882621 kubelet[1803]: > pod="default/nginx-deployment-7fcdb87857-zb84p" Jan 17 00:44:17.883000 kubelet[1803]: E0117 00:44:17.882643 1803 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err=< Jan 17 00:44:17.883000 kubelet[1803]: rpc error: code = Unknown desc = failed to setup network for sandbox "be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Jan 17 00:44:17.883000 kubelet[1803]: Is the agent running? Jan 17 00:44:17.883000 kubelet[1803]: > pod="default/nginx-deployment-7fcdb87857-zb84p" Jan 17 00:44:17.895977 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b-shm.mount: Deactivated successfully. 
Jan 17 00:44:17.896236 systemd-networkd[1390]: cilium_vxlan: Gained IPv6LL Jan 17 00:44:17.922385 kubelet[1803]: E0117 00:44:17.920561 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-zb84p_default(5afdf435-8b02-4398-9819-bf195cfcb270)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-zb84p_default(5afdf435-8b02-4398-9819-bf195cfcb270)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \\\"http:///var/run/cilium/cilium.sock/v1/config\\\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\\nIs the agent running?\"" pod="default/nginx-deployment-7fcdb87857-zb84p" podUID="5afdf435-8b02-4398-9819-bf195cfcb270" Jan 17 00:44:18.277116 kubelet[1803]: E0117 00:44:18.256001 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:18.801977 kubelet[1803]: I0117 00:44:18.801391 1803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b" Jan 17 00:44:18.806268 containerd[1467]: time="2026-01-17T00:44:18.806145064Z" level=info msg="StopPodSandbox for \"be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b\"" Jan 17 00:44:18.806855 containerd[1467]: time="2026-01-17T00:44:18.806415582Z" level=info msg="Ensure that sandbox be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b in task-service has been cleanup successfully" Jan 17 00:44:19.281945 kubelet[1803]: E0117 00:44:19.281767 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:20.285153 kubelet[1803]: E0117 00:44:20.285045 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:20.864677 kubelet[1803]: E0117 00:44:20.857109 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:21.069013 systemd-networkd[1390]: lxc_health: Link UP Jan 17 00:44:21.119059 systemd-networkd[1390]: lxc_health: Gained carrier Jan 17 00:44:21.296540 kubelet[1803]: E0117 00:44:21.292999 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:21.612119 kubelet[1803]: E0117 00:44:21.604304 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:21.899734 kubelet[1803]: E0117 00:44:21.895330 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:21.946892 containerd[1467]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Jan 17 00:44:21.946892 containerd[1467]: time="2026-01-17T00:44:21.937045012Z" level=info msg="TearDown network for sandbox 
\"be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b\" successfully" Jan 17 00:44:21.946892 containerd[1467]: time="2026-01-17T00:44:21.937080288Z" level=info msg="StopPodSandbox for \"be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b\" returns successfully" Jan 17 00:44:21.946892 containerd[1467]: time="2026-01-17T00:44:21.938083752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-zb84p,Uid:5afdf435-8b02-4398-9819-bf195cfcb270,Namespace:default,Attempt:1,}" Jan 17 00:44:21.951777 systemd[1]: run-netns-cni\x2d897f65a5\x2da6c4\x2d2702\x2dbdf0\x2dbb711631acea.mount: Deactivated successfully. Jan 17 00:44:22.180156 systemd-networkd[1390]: lxcb6ac1c656023: Link UP Jan 17 00:44:22.317721 kernel: eth0: renamed from tmpc0d4f Jan 17 00:44:22.373514 kubelet[1803]: E0117 00:44:22.373360 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:22.386383 systemd-networkd[1390]: lxcb6ac1c656023: Gained carrier Jan 17 00:44:22.426585 systemd-networkd[1390]: lxc_health: Gained IPv6LL Jan 17 00:44:23.377529 kubelet[1803]: E0117 00:44:23.376576 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:23.690583 systemd-networkd[1390]: lxcb6ac1c656023: Gained IPv6LL Jan 17 00:44:24.384586 kubelet[1803]: E0117 00:44:24.380128 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:25.385675 kubelet[1803]: E0117 00:44:25.384548 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:26.388331 kubelet[1803]: E0117 00:44:26.387462 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:27.395563 kubelet[1803]: E0117 00:44:27.394606 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:28.146295 kubelet[1803]: E0117 00:44:28.142340 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:28.398932 kubelet[1803]: E0117 00:44:28.398209 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:29.402941 kubelet[1803]: E0117 00:44:29.400095 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:30.414062 kubelet[1803]: E0117 00:44:30.412661 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:31.575125 kubelet[1803]: E0117 00:44:31.574104 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:32.629168 kubelet[1803]: E0117 00:44:32.619465 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:32.711771 containerd[1467]: time="2026-01-17T00:44:32.710147077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:44:32.711771 containerd[1467]: time="2026-01-17T00:44:32.710293456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:44:32.711771 containerd[1467]: time="2026-01-17T00:44:32.710366263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:44:32.711771 containerd[1467]: time="2026-01-17T00:44:32.710781291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:44:33.023452 systemd[1]: Started cri-containerd-c0d4f39a75d79cb26700f267432909a051bd7ee275080f65453fa1d17637dfd9.scope - libcontainer container c0d4f39a75d79cb26700f267432909a051bd7ee275080f65453fa1d17637dfd9. Jan 17 00:44:33.098186 systemd-resolved[1393]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:44:33.367639 containerd[1467]: time="2026-01-17T00:44:33.367498558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-zb84p,Uid:5afdf435-8b02-4398-9819-bf195cfcb270,Namespace:default,Attempt:1,} returns sandbox id \"c0d4f39a75d79cb26700f267432909a051bd7ee275080f65453fa1d17637dfd9\"" Jan 17 00:44:33.375803 containerd[1467]: time="2026-01-17T00:44:33.371610735Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 00:44:33.700771 kubelet[1803]: E0117 00:44:33.628107 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:34.632198 kubelet[1803]: E0117 00:44:34.631237 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:35.910863 kubelet[1803]: E0117 00:44:35.909967 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:37.045791 kubelet[1803]: E0117 00:44:37.044949 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:38.048991 kubelet[1803]: E0117 00:44:38.046894 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:39.048170 kubelet[1803]: E0117 00:44:39.047684 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:39.872218 kubelet[1803]: E0117 00:44:39.871644 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:40.052403 kubelet[1803]: E0117 00:44:40.051891 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:41.053627 kubelet[1803]: E0117 00:44:41.052379 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:42.057113 kubelet[1803]: E0117 00:44:42.056522 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:43.059494 kubelet[1803]: E0117 00:44:43.058014 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 17 00:44:44.143120 kubelet[1803]: E0117 00:44:44.080038 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:45.133272 kubelet[1803]: E0117 00:44:45.132424 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:45.332061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3330719234.mount: Deactivated successfully. Jan 17 00:44:46.357410 kubelet[1803]: E0117 00:44:46.355066 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:47.389586 kubelet[1803]: E0117 00:44:47.370404 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:48.231568 kubelet[1803]: E0117 00:44:48.214185 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:48.417316 kubelet[1803]: E0117 00:44:48.415179 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:49.422381 kubelet[1803]: E0117 00:44:49.421608 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:50.433019 kubelet[1803]: E0117 00:44:50.432039 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:51.434394 kubelet[1803]: E0117 00:44:51.433706 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:52.444084 kubelet[1803]: E0117 00:44:52.443092 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:53.456193 kubelet[1803]: E0117 00:44:53.455530 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:54.550012 containerd[1467]: time="2026-01-17T00:44:54.546251852Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:44:54.681541 containerd[1467]: time="2026-01-17T00:44:54.675140843Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63840319" Jan 17 00:44:54.681613 kubelet[1803]: E0117 00:44:54.642910 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:54.701857 containerd[1467]: time="2026-01-17T00:44:54.699101624Z" level=info msg="ImageCreate event name:\"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:44:54.721046 containerd[1467]: time="2026-01-17T00:44:54.716250301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:44:54.721046 containerd[1467]: time="2026-01-17T00:44:54.718090128Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest 
\"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\", size \"63840197\" in 21.346437663s" Jan 17 00:44:54.721046 containerd[1467]: time="2026-01-17T00:44:54.718179547Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\"" Jan 17 00:44:54.745158 containerd[1467]: time="2026-01-17T00:44:54.744772531Z" level=info msg="CreateContainer within sandbox \"c0d4f39a75d79cb26700f267432909a051bd7ee275080f65453fa1d17637dfd9\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 17 00:44:54.839005 containerd[1467]: time="2026-01-17T00:44:54.838054531Z" level=info msg="CreateContainer within sandbox \"c0d4f39a75d79cb26700f267432909a051bd7ee275080f65453fa1d17637dfd9\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d27a60f5726131439995b13391632658ca22338e42cc1efcf1029eec0e775922\"" Jan 17 00:44:54.841652 containerd[1467]: time="2026-01-17T00:44:54.841620741Z" level=info msg="StartContainer for \"d27a60f5726131439995b13391632658ca22338e42cc1efcf1029eec0e775922\"" Jan 17 00:44:55.177770 systemd[1]: Started cri-containerd-d27a60f5726131439995b13391632658ca22338e42cc1efcf1029eec0e775922.scope - libcontainer container d27a60f5726131439995b13391632658ca22338e42cc1efcf1029eec0e775922. Jan 17 00:44:55.411054 containerd[1467]: time="2026-01-17T00:44:55.409407420Z" level=info msg="StartContainer for \"d27a60f5726131439995b13391632658ca22338e42cc1efcf1029eec0e775922\" returns successfully" Jan 17 00:44:55.644876 kubelet[1803]: E0117 00:44:55.644655 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:56.105439 kubelet[1803]: I0117 00:44:56.096578 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-zb84p" podStartSLOduration=78.738679768 podStartE2EDuration="1m40.096471327s" podCreationTimestamp="2026-01-17 00:43:16 +0000 UTC" firstStartedPulling="2026-01-17 00:44:33.370007036 +0000 UTC m=+126.669311942" lastFinishedPulling="2026-01-17 00:44:54.727798595 +0000 UTC m=+148.027103501" observedRunningTime="2026-01-17 00:44:56.096358261 +0000 UTC m=+149.395663147" watchObservedRunningTime="2026-01-17 00:44:56.096471327 +0000 UTC m=+149.395776213" Jan 17 00:44:56.649063 kubelet[1803]: E0117 00:44:56.646955 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:57.649550 kubelet[1803]: E0117 00:44:57.648301 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:58.652228 kubelet[1803]: E0117 00:44:58.651239 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:44:59.652093 kubelet[1803]: E0117 00:44:59.651983 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:00.657182 kubelet[1803]: E0117 00:45:00.653350 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:01.562352 kubelet[1803]: I0117 00:45:01.557734 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ed5badc8-3c93-4b7f-a325-11123f9b6e58-data\") pod 
\"nfs-server-provisioner-0\" (UID: \"ed5badc8-3c93-4b7f-a325-11123f9b6e58\") " pod="default/nfs-server-provisioner-0" Jan 17 00:45:01.562352 kubelet[1803]: I0117 00:45:01.557871 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v5hg\" (UniqueName: \"kubernetes.io/projected/ed5badc8-3c93-4b7f-a325-11123f9b6e58-kube-api-access-6v5hg\") pod \"nfs-server-provisioner-0\" (UID: \"ed5badc8-3c93-4b7f-a325-11123f9b6e58\") " pod="default/nfs-server-provisioner-0" Jan 17 00:45:01.560492 systemd[1]: Created slice kubepods-besteffort-poded5badc8_3c93_4b7f_a325_11123f9b6e58.slice - libcontainer container kubepods-besteffort-poded5badc8_3c93_4b7f_a325_11123f9b6e58.slice. Jan 17 00:45:01.655995 kubelet[1803]: E0117 00:45:01.655551 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:01.884570 containerd[1467]: time="2026-01-17T00:45:01.884035965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ed5badc8-3c93-4b7f-a325-11123f9b6e58,Namespace:default,Attempt:0,}" Jan 17 00:45:02.113886 systemd-networkd[1390]: lxc189e54765114: Link UP Jan 17 00:45:02.157476 kernel: eth0: renamed from tmpfbbc0 Jan 17 00:45:02.172537 systemd-networkd[1390]: lxc189e54765114: Gained carrier Jan 17 00:45:02.656274 kubelet[1803]: E0117 00:45:02.656201 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:02.978975 containerd[1467]: time="2026-01-17T00:45:02.978004664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:45:02.978975 containerd[1467]: time="2026-01-17T00:45:02.978170346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:45:02.978975 containerd[1467]: time="2026-01-17T00:45:02.978192799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:45:02.978975 containerd[1467]: time="2026-01-17T00:45:02.978363200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:45:03.107412 systemd[1]: Started cri-containerd-fbbc0ace5ab2bd99e66fbe5f826053b5dbdad803879d51b21c846e49cc0fc95b.scope - libcontainer container fbbc0ace5ab2bd99e66fbe5f826053b5dbdad803879d51b21c846e49cc0fc95b. 
Jan 17 00:45:03.155105 systemd-resolved[1393]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:45:03.293323 containerd[1467]: time="2026-01-17T00:45:03.293200509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ed5badc8-3c93-4b7f-a325-11123f9b6e58,Namespace:default,Attempt:0,} returns sandbox id \"fbbc0ace5ab2bd99e66fbe5f826053b5dbdad803879d51b21c846e49cc0fc95b\"" Jan 17 00:45:03.319339 containerd[1467]: time="2026-01-17T00:45:03.313431413Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 17 00:45:03.656443 kubelet[1803]: E0117 00:45:03.656333 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:03.792013 systemd-networkd[1390]: lxc189e54765114: Gained IPv6LL Jan 17 00:45:04.664438 kubelet[1803]: E0117 00:45:04.663672 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:05.668078 kubelet[1803]: E0117 00:45:05.666870 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:06.667137 kubelet[1803]: E0117 00:45:06.667071 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:07.683963 kubelet[1803]: E0117 00:45:07.680664 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:09.808873 kubelet[1803]: E0117 00:45:09.733696 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:09.808873 kubelet[1803]: E0117 00:45:09.818713 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:10.797199 kubelet[1803]: E0117 00:45:10.796143 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:11.824101 kubelet[1803]: E0117 00:45:11.815114 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:12.822689 kubelet[1803]: E0117 00:45:12.821483 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:13.825608 kubelet[1803]: E0117 00:45:13.824942 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:14.855537 kubelet[1803]: E0117 00:45:14.840552 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:15.850459 kubelet[1803]: E0117 00:45:15.850239 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:15.855621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2528916843.mount: Deactivated successfully. 
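
[Editor's aside] Most kubelet output in this window is the same "Unable to read config path ... /etc/kubernetes/manifests" message repeated roughly once a second. When reading a dump like this it helps to collapse repeats and count them. A small helper sketch, assuming journal-style lines and a deliberately simplified version of the klog prefix (per-line key=value fields are ignored):

```python
#!/usr/bin/env python3
# Collapse a journal dump like the one above into (message, count) pairs so
# repeated entries such as 'Unable to read config path' do not drown out the
# interesting ones. Sketch only: the regex is a simplification of the klog
# structured-log format.
import re
import sys
from collections import Counter

# e.g.  kubelet[1803]: E0117 00:45:28.251606 1803 file_linux.go:61] "Unable to read config path" ...
KLOG = re.compile(r'kubelet\[\d+\]: [EIW]\d{4} [\d:.]+ +\d+ \S+\] "([^"]+)"')

def summarize(lines):
    counts = Counter()
    for line in lines:
        for msg in KLOG.findall(line):
            counts[msg] += 1
    return counts

if __name__ == "__main__":
    for msg, n in summarize(sys.stdin).most_common(10):
        print(f"{n:6d}  {msg}")
```
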
Jan 17 00:45:16.852598 kubelet[1803]: E0117 00:45:16.852099 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:17.905707 kubelet[1803]: E0117 00:45:17.903378 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:18.917884 kubelet[1803]: E0117 00:45:18.917183 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:19.987150 kubelet[1803]: E0117 00:45:19.984099 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:20.292189 kubelet[1803]: E0117 00:45:20.114888 1803 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.198s" Jan 17 00:45:21.122632 kubelet[1803]: E0117 00:45:21.122554 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:22.212341 kubelet[1803]: E0117 00:45:22.211426 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:23.216006 kubelet[1803]: E0117 00:45:23.215176 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:24.220900 kubelet[1803]: E0117 00:45:24.220759 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:25.222937 kubelet[1803]: E0117 00:45:25.220958 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:26.227189 kubelet[1803]: E0117 00:45:26.221547 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:27.243461 kubelet[1803]: E0117 00:45:27.242727 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:28.173437 kubelet[1803]: E0117 00:45:28.172465 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:28.254967 kubelet[1803]: E0117 00:45:28.251606 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:28.701594 containerd[1467]: time="2026-01-17T00:45:28.701480948Z" level=info msg="StopPodSandbox for \"be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b\"" Jan 17 00:45:28.800282 containerd[1467]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Jan 17 00:45:28.800282 containerd[1467]: level=warning msg="Unable to enter namespace \"\", will not delete interface" error="failed to Statfs \"\": no such file or directory" subsys=cilium-cni Jan 17 00:45:28.800282 containerd[1467]: time="2026-01-17T00:45:28.797107915Z" level=info msg="TearDown network for sandbox \"be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b\" successfully" Jan 17 00:45:28.800282 containerd[1467]: time="2026-01-17T00:45:28.797152229Z" level=info msg="StopPodSandbox for \"be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b\" returns successfully" Jan 17 00:45:28.801140 containerd[1467]: 
time="2026-01-17T00:45:28.800846472Z" level=info msg="RemovePodSandbox for \"be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b\"" Jan 17 00:45:28.801140 containerd[1467]: time="2026-01-17T00:45:28.800888991Z" level=info msg="Forcibly stopping sandbox \"be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b\"" Jan 17 00:45:28.857946 containerd[1467]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Jan 17 00:45:28.857946 containerd[1467]: level=warning msg="Unable to enter namespace \"\", will not delete interface" error="failed to Statfs \"\": no such file or directory" subsys=cilium-cni Jan 17 00:45:28.857946 containerd[1467]: time="2026-01-17T00:45:28.855324255Z" level=info msg="TearDown network for sandbox \"be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b\" successfully" Jan 17 00:45:28.916947 containerd[1467]: time="2026-01-17T00:45:28.916154511Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:45:28.916947 containerd[1467]: time="2026-01-17T00:45:28.916622562Z" level=info msg="RemovePodSandbox \"be6923282c3fc4a86b4e7770c2cdc1d6af6c3d3c5e058f15bb3fcd2b134a506b\" returns successfully" Jan 17 00:45:29.256685 kubelet[1803]: E0117 00:45:29.256392 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:30.257318 kubelet[1803]: E0117 00:45:30.256582 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:30.419562 containerd[1467]: time="2026-01-17T00:45:30.419432272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:45:30.422194 containerd[1467]: time="2026-01-17T00:45:30.421791906Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 17 00:45:30.429912 containerd[1467]: time="2026-01-17T00:45:30.429765677Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:45:30.437851 containerd[1467]: time="2026-01-17T00:45:30.437659708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:45:30.441284 containerd[1467]: time="2026-01-17T00:45:30.441190437Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 27.127698661s" Jan 17 00:45:30.441284 containerd[1467]: time="2026-01-17T00:45:30.441256462Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference 
\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 17 00:45:30.461365 containerd[1467]: time="2026-01-17T00:45:30.458933981Z" level=info msg="CreateContainer within sandbox \"fbbc0ace5ab2bd99e66fbe5f826053b5dbdad803879d51b21c846e49cc0fc95b\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 17 00:45:30.497178 containerd[1467]: time="2026-01-17T00:45:30.496080800Z" level=info msg="CreateContainer within sandbox \"fbbc0ace5ab2bd99e66fbe5f826053b5dbdad803879d51b21c846e49cc0fc95b\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"3af38d7d45265cc51e70e0b8fd06e479b423d84324644360fc2acb7bfc8c4798\"" Jan 17 00:45:30.497334 containerd[1467]: time="2026-01-17T00:45:30.497264232Z" level=info msg="StartContainer for \"3af38d7d45265cc51e70e0b8fd06e479b423d84324644360fc2acb7bfc8c4798\"" Jan 17 00:45:30.634922 systemd[1]: run-containerd-runc-k8s.io-3af38d7d45265cc51e70e0b8fd06e479b423d84324644360fc2acb7bfc8c4798-runc.xiAGof.mount: Deactivated successfully. Jan 17 00:45:30.675602 systemd[1]: Started cri-containerd-3af38d7d45265cc51e70e0b8fd06e479b423d84324644360fc2acb7bfc8c4798.scope - libcontainer container 3af38d7d45265cc51e70e0b8fd06e479b423d84324644360fc2acb7bfc8c4798. Jan 17 00:45:30.829396 containerd[1467]: time="2026-01-17T00:45:30.829252832Z" level=info msg="StartContainer for \"3af38d7d45265cc51e70e0b8fd06e479b423d84324644360fc2acb7bfc8c4798\" returns successfully" Jan 17 00:45:31.257947 kubelet[1803]: E0117 00:45:31.257804 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:31.772270 kubelet[1803]: I0117 00:45:31.770915 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=3.6374488339999997 podStartE2EDuration="30.770891969s" podCreationTimestamp="2026-01-17 00:45:01 +0000 UTC" firstStartedPulling="2026-01-17 00:45:03.312048293 +0000 UTC m=+156.611353179" lastFinishedPulling="2026-01-17 00:45:30.445491418 +0000 UTC m=+183.744796314" observedRunningTime="2026-01-17 00:45:31.769595833 +0000 UTC m=+185.068900760" watchObservedRunningTime="2026-01-17 00:45:31.770891969 +0000 UTC m=+185.070196886" Jan 17 00:45:32.272581 kubelet[1803]: E0117 00:45:32.271268 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:33.274098 kubelet[1803]: E0117 00:45:33.273942 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:34.369634 kubelet[1803]: E0117 00:45:34.369549 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:35.378308 kubelet[1803]: E0117 00:45:35.377020 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:36.379339 kubelet[1803]: E0117 00:45:36.378770 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:36.627264 systemd[1]: Created slice kubepods-besteffort-pod5add273d_baa8_4397_92d4_e2da19780f4e.slice - libcontainer container kubepods-besteffort-pod5add273d_baa8_4397_92d4_e2da19780f4e.slice. 
Jan 17 00:45:36.745158 kubelet[1803]: I0117 00:45:36.741089 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ae5ed494-8c70-4613-9964-2ee1faa2156b\" (UniqueName: \"kubernetes.io/nfs/5add273d-baa8-4397-92d4-e2da19780f4e-pvc-ae5ed494-8c70-4613-9964-2ee1faa2156b\") pod \"test-pod-1\" (UID: \"5add273d-baa8-4397-92d4-e2da19780f4e\") " pod="default/test-pod-1" Jan 17 00:45:36.745158 kubelet[1803]: I0117 00:45:36.741304 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbnks\" (UniqueName: \"kubernetes.io/projected/5add273d-baa8-4397-92d4-e2da19780f4e-kube-api-access-sbnks\") pod \"test-pod-1\" (UID: \"5add273d-baa8-4397-92d4-e2da19780f4e\") " pod="default/test-pod-1" Jan 17 00:45:37.109361 kernel: FS-Cache: Loaded Jan 17 00:45:37.383021 kubelet[1803]: E0117 00:45:37.380089 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:37.461649 kernel: RPC: Registered named UNIX socket transport module. Jan 17 00:45:37.467309 kernel: RPC: Registered udp transport module. Jan 17 00:45:37.467467 kernel: RPC: Registered tcp transport module. Jan 17 00:45:37.474539 kernel: RPC: Registered tcp-with-tls transport module. Jan 17 00:45:37.474629 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 17 00:45:38.284162 kernel: NFS: Registering the id_resolver key type Jan 17 00:45:38.286315 kernel: Key type id_resolver registered Jan 17 00:45:38.286426 kernel: Key type id_legacy registered Jan 17 00:45:38.396375 kubelet[1803]: E0117 00:45:38.394372 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:38.447659 nfsidmap[3308]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 17 00:45:38.509753 nfsidmap[3311]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 17 00:45:38.752648 containerd[1467]: time="2026-01-17T00:45:38.748761236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5add273d-baa8-4397-92d4-e2da19780f4e,Namespace:default,Attempt:0,}" Jan 17 00:45:39.025593 systemd-networkd[1390]: lxc45a126bc8b09: Link UP Jan 17 00:45:39.070567 kernel: eth0: renamed from tmpfd53e Jan 17 00:45:39.114602 systemd-networkd[1390]: lxc45a126bc8b09: Gained carrier Jan 17 00:45:39.398173 kubelet[1803]: E0117 00:45:39.395229 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:39.803351 containerd[1467]: time="2026-01-17T00:45:39.796251325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:45:39.803351 containerd[1467]: time="2026-01-17T00:45:39.796412249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:45:39.803351 containerd[1467]: time="2026-01-17T00:45:39.796472332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:45:39.803351 containerd[1467]: time="2026-01-17T00:45:39.796595974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:45:39.957913 systemd[1]: Started cri-containerd-fd53ef3ed22d44b78885371441c052455356824758f498abf3867a02af132af7.scope - libcontainer container fd53ef3ed22d44b78885371441c052455356824758f498abf3867a02af132af7. Jan 17 00:45:40.123359 systemd-resolved[1393]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:45:40.251271 containerd[1467]: time="2026-01-17T00:45:40.246934062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5add273d-baa8-4397-92d4-e2da19780f4e,Namespace:default,Attempt:0,} returns sandbox id \"fd53ef3ed22d44b78885371441c052455356824758f498abf3867a02af132af7\"" Jan 17 00:45:40.259071 containerd[1467]: time="2026-01-17T00:45:40.254758974Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 00:45:40.399275 kubelet[1803]: E0117 00:45:40.399097 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:40.448767 containerd[1467]: time="2026-01-17T00:45:40.446441504Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:45:40.452960 containerd[1467]: time="2026-01-17T00:45:40.452052598Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 17 00:45:40.475517 containerd[1467]: time="2026-01-17T00:45:40.466504695Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\", size \"63840197\" in 211.698662ms" Jan 17 00:45:40.475517 containerd[1467]: time="2026-01-17T00:45:40.466597379Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\"" Jan 17 00:45:40.495846 containerd[1467]: time="2026-01-17T00:45:40.495363784Z" level=info msg="CreateContainer within sandbox \"fd53ef3ed22d44b78885371441c052455356824758f498abf3867a02af132af7\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 17 00:45:40.589429 containerd[1467]: time="2026-01-17T00:45:40.589167977Z" level=info msg="CreateContainer within sandbox \"fd53ef3ed22d44b78885371441c052455356824758f498abf3867a02af132af7\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"0ff98ca463ca4f4f880a5c140610fc66df3ae11c7ab08066d4767499952102eb\"" Jan 17 00:45:40.594932 containerd[1467]: time="2026-01-17T00:45:40.590350586Z" level=info msg="StartContainer for \"0ff98ca463ca4f4f880a5c140610fc66df3ae11c7ab08066d4767499952102eb\"" Jan 17 00:45:40.634374 systemd-networkd[1390]: lxc45a126bc8b09: Gained IPv6LL Jan 17 00:45:40.743345 systemd[1]: Started cri-containerd-0ff98ca463ca4f4f880a5c140610fc66df3ae11c7ab08066d4767499952102eb.scope - libcontainer container 0ff98ca463ca4f4f880a5c140610fc66df3ae11c7ab08066d4767499952102eb. 
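
[Editor's aside] The nfsidmap warnings above ("name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'") come from NFSv4 ID mapping: owners arrive as user@domain, and the client only resolves them when the domain part matches its own idmapd domain, otherwise the ID falls back to the anonymous/nobody identity. The toy function below only illustrates that domain comparison; map_owner and the hard-coded "nobody" fallback are illustrative, not libnfsidmap's real resolution logic:

```python
#!/usr/bin/env python3
# Toy illustration of why nfsidmap rejected the owner string in the log above:
# NFSv4 owners are "user@domain", and mapping succeeds only when the domain
# matches the client's configured idmapd domain ("localdomain" here).
LOCAL_DOMAIN = "localdomain"   # as reported in the warning above

def map_owner(owner: str, local_domain: str = LOCAL_DOMAIN) -> str:
    user, _, domain = owner.partition("@")
    if domain.lower() != local_domain.lower():
        # Mismatched domain: the name cannot be resolved to a local account,
        # so NFS falls back to the anonymous ("nobody") identity.
        return "nobody"
    return user

print(map_owner("root@nfs-server-provisioner.default.svc.cluster.local"))  # -> nobody
print(map_owner("root@localdomain"))                                       # -> root
```
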
Jan 17 00:45:40.885088 containerd[1467]: time="2026-01-17T00:45:40.884445301Z" level=info msg="StartContainer for \"0ff98ca463ca4f4f880a5c140610fc66df3ae11c7ab08066d4767499952102eb\" returns successfully" Jan 17 00:45:41.402261 kubelet[1803]: E0117 00:45:41.399441 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:42.403276 kubelet[1803]: E0117 00:45:42.402681 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:43.412688 kubelet[1803]: E0117 00:45:43.411485 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:44.415400 kubelet[1803]: E0117 00:45:44.414736 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:45.449669 kubelet[1803]: E0117 00:45:45.448435 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:46.056635 kubelet[1803]: E0117 00:45:46.055984 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:46.452130 kubelet[1803]: E0117 00:45:46.450925 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:47.452368 kubelet[1803]: E0117 00:45:47.452189 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:48.142267 kubelet[1803]: E0117 00:45:48.141734 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:48.459787 kubelet[1803]: E0117 00:45:48.453041 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:49.454402 kubelet[1803]: E0117 00:45:49.454215 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:50.331141 kubelet[1803]: I0117 00:45:50.330990 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=48.116246353 podStartE2EDuration="48.330966022s" podCreationTimestamp="2026-01-17 00:45:02 +0000 UTC" firstStartedPulling="2026-01-17 00:45:40.254038758 +0000 UTC m=+193.553343644" lastFinishedPulling="2026-01-17 00:45:40.468758427 +0000 UTC m=+193.768063313" observedRunningTime="2026-01-17 00:45:41.931482797 +0000 UTC m=+195.230787702" watchObservedRunningTime="2026-01-17 00:45:50.330966022 +0000 UTC m=+203.630270908" Jan 17 00:45:50.457383 kubelet[1803]: E0117 00:45:50.457321 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:50.479958 containerd[1467]: time="2026-01-17T00:45:50.479863850Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:45:50.510785 containerd[1467]: time="2026-01-17T00:45:50.510332677Z" level=info msg="StopContainer for \"01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b\" with 
timeout 2 (s)" Jan 17 00:45:50.517708 containerd[1467]: time="2026-01-17T00:45:50.514536311Z" level=info msg="Stop container \"01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b\" with signal terminated" Jan 17 00:45:50.565686 systemd-networkd[1390]: lxc_health: Link DOWN Jan 17 00:45:50.565720 systemd-networkd[1390]: lxc_health: Lost carrier Jan 17 00:45:50.627038 systemd[1]: cri-containerd-01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b.scope: Deactivated successfully. Jan 17 00:45:50.627777 systemd[1]: cri-containerd-01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b.scope: Consumed 23.193s CPU time. Jan 17 00:45:50.744660 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b-rootfs.mount: Deactivated successfully. Jan 17 00:45:50.878293 containerd[1467]: time="2026-01-17T00:45:50.878053191Z" level=info msg="shim disconnected" id=01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b namespace=k8s.io Jan 17 00:45:50.878293 containerd[1467]: time="2026-01-17T00:45:50.878143398Z" level=warning msg="cleaning up after shim disconnected" id=01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b namespace=k8s.io Jan 17 00:45:50.878293 containerd[1467]: time="2026-01-17T00:45:50.878156844Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:45:50.931348 containerd[1467]: time="2026-01-17T00:45:50.929124431Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:45:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:45:50.943462 containerd[1467]: time="2026-01-17T00:45:50.942991366Z" level=info msg="StopContainer for \"01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b\" returns successfully" Jan 17 00:45:50.944266 containerd[1467]: time="2026-01-17T00:45:50.944051916Z" level=info msg="StopPodSandbox for \"5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273\"" Jan 17 00:45:50.944266 containerd[1467]: time="2026-01-17T00:45:50.944099093Z" level=info msg="Container to stop \"ff07a1babcbdd8e9080b2c5affe510b51e7292a2b0d4232a142db2d7e3fc0d50\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:45:50.944266 containerd[1467]: time="2026-01-17T00:45:50.944116766Z" level=info msg="Container to stop \"11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:45:50.944266 containerd[1467]: time="2026-01-17T00:45:50.944129680Z" level=info msg="Container to stop \"01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:45:50.944266 containerd[1467]: time="2026-01-17T00:45:50.944143366Z" level=info msg="Container to stop \"b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:45:50.944266 containerd[1467]: time="2026-01-17T00:45:50.944156360Z" level=info msg="Container to stop \"4369ff3a38b0882f90e570756c78fbd3affd49ce8a40f948395c07d741f16cc4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:45:50.947229 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273-shm.mount: Deactivated 
successfully. Jan 17 00:45:50.986620 systemd[1]: cri-containerd-5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273.scope: Deactivated successfully. Jan 17 00:45:51.045380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273-rootfs.mount: Deactivated successfully. Jan 17 00:45:51.075119 containerd[1467]: time="2026-01-17T00:45:51.071229256Z" level=info msg="shim disconnected" id=5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273 namespace=k8s.io Jan 17 00:45:51.075119 containerd[1467]: time="2026-01-17T00:45:51.071320396Z" level=warning msg="cleaning up after shim disconnected" id=5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273 namespace=k8s.io Jan 17 00:45:51.075119 containerd[1467]: time="2026-01-17T00:45:51.071335533Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:45:51.099277 containerd[1467]: time="2026-01-17T00:45:51.099096370Z" level=info msg="TearDown network for sandbox \"5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273\" successfully" Jan 17 00:45:51.099277 containerd[1467]: time="2026-01-17T00:45:51.099164096Z" level=info msg="StopPodSandbox for \"5be3a35aaae6a0cf92af459474322fb904a17c39aad9e63f68c1fdb024629273\" returns successfully" Jan 17 00:45:51.145370 kubelet[1803]: I0117 00:45:51.145321 1803 scope.go:117] "RemoveContainer" containerID="01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b" Jan 17 00:45:51.151750 containerd[1467]: time="2026-01-17T00:45:51.151697241Z" level=info msg="RemoveContainer for \"01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b\"" Jan 17 00:45:51.165226 containerd[1467]: time="2026-01-17T00:45:51.165179582Z" level=info msg="RemoveContainer for \"01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b\" returns successfully" Jan 17 00:45:51.166321 kubelet[1803]: I0117 00:45:51.166187 1803 scope.go:117] "RemoveContainer" containerID="4369ff3a38b0882f90e570756c78fbd3affd49ce8a40f948395c07d741f16cc4" Jan 17 00:45:51.169599 containerd[1467]: time="2026-01-17T00:45:51.169569063Z" level=info msg="RemoveContainer for \"4369ff3a38b0882f90e570756c78fbd3affd49ce8a40f948395c07d741f16cc4\"" Jan 17 00:45:51.175265 containerd[1467]: time="2026-01-17T00:45:51.175144229Z" level=info msg="RemoveContainer for \"4369ff3a38b0882f90e570756c78fbd3affd49ce8a40f948395c07d741f16cc4\" returns successfully" Jan 17 00:45:51.175493 kubelet[1803]: I0117 00:45:51.175404 1803 scope.go:117] "RemoveContainer" containerID="11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c" Jan 17 00:45:51.179689 containerd[1467]: time="2026-01-17T00:45:51.179656284Z" level=info msg="RemoveContainer for \"11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c\"" Jan 17 00:45:51.187999 containerd[1467]: time="2026-01-17T00:45:51.187858213Z" level=info msg="RemoveContainer for \"11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c\" returns successfully" Jan 17 00:45:51.188733 kubelet[1803]: I0117 00:45:51.188578 1803 scope.go:117] "RemoveContainer" containerID="ff07a1babcbdd8e9080b2c5affe510b51e7292a2b0d4232a142db2d7e3fc0d50" Jan 17 00:45:51.195266 containerd[1467]: time="2026-01-17T00:45:51.195224171Z" level=info msg="RemoveContainer for \"ff07a1babcbdd8e9080b2c5affe510b51e7292a2b0d4232a142db2d7e3fc0d50\"" Jan 17 00:45:51.210426 containerd[1467]: time="2026-01-17T00:45:51.210365430Z" level=info msg="RemoveContainer for 
\"ff07a1babcbdd8e9080b2c5affe510b51e7292a2b0d4232a142db2d7e3fc0d50\" returns successfully" Jan 17 00:45:51.212231 kubelet[1803]: I0117 00:45:51.211146 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01a1daf8-06fe-4b2b-872a-d498672270e2-clustermesh-secrets\") pod \"01a1daf8-06fe-4b2b-872a-d498672270e2\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " Jan 17 00:45:51.212231 kubelet[1803]: I0117 00:45:51.211211 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7lh6\" (UniqueName: \"kubernetes.io/projected/01a1daf8-06fe-4b2b-872a-d498672270e2-kube-api-access-n7lh6\") pod \"01a1daf8-06fe-4b2b-872a-d498672270e2\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " Jan 17 00:45:51.212231 kubelet[1803]: I0117 00:45:51.211259 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-cilium-run\") pod \"01a1daf8-06fe-4b2b-872a-d498672270e2\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " Jan 17 00:45:51.212231 kubelet[1803]: I0117 00:45:51.211281 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-cni-path\") pod \"01a1daf8-06fe-4b2b-872a-d498672270e2\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " Jan 17 00:45:51.212231 kubelet[1803]: I0117 00:45:51.211303 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-host-proc-sys-kernel\") pod \"01a1daf8-06fe-4b2b-872a-d498672270e2\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " Jan 17 00:45:51.212231 kubelet[1803]: I0117 00:45:51.211328 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01a1daf8-06fe-4b2b-872a-d498672270e2-cilium-config-path\") pod \"01a1daf8-06fe-4b2b-872a-d498672270e2\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " Jan 17 00:45:51.212517 kubelet[1803]: I0117 00:45:51.211352 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-etc-cni-netd\") pod \"01a1daf8-06fe-4b2b-872a-d498672270e2\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " Jan 17 00:45:51.212517 kubelet[1803]: I0117 00:45:51.211370 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-host-proc-sys-net\") pod \"01a1daf8-06fe-4b2b-872a-d498672270e2\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " Jan 17 00:45:51.212517 kubelet[1803]: I0117 00:45:51.211387 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-bpf-maps\") pod \"01a1daf8-06fe-4b2b-872a-d498672270e2\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " Jan 17 00:45:51.212517 kubelet[1803]: I0117 00:45:51.211409 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-cilium-cgroup\") pod 
\"01a1daf8-06fe-4b2b-872a-d498672270e2\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " Jan 17 00:45:51.212517 kubelet[1803]: I0117 00:45:51.211457 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-hostproc\") pod \"01a1daf8-06fe-4b2b-872a-d498672270e2\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " Jan 17 00:45:51.212517 kubelet[1803]: I0117 00:45:51.211480 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01a1daf8-06fe-4b2b-872a-d498672270e2-hubble-tls\") pod \"01a1daf8-06fe-4b2b-872a-d498672270e2\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " Jan 17 00:45:51.212749 kubelet[1803]: I0117 00:45:51.211503 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-lib-modules\") pod \"01a1daf8-06fe-4b2b-872a-d498672270e2\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " Jan 17 00:45:51.212749 kubelet[1803]: I0117 00:45:51.211522 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-xtables-lock\") pod \"01a1daf8-06fe-4b2b-872a-d498672270e2\" (UID: \"01a1daf8-06fe-4b2b-872a-d498672270e2\") " Jan 17 00:45:51.212749 kubelet[1803]: I0117 00:45:51.211680 1803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "01a1daf8-06fe-4b2b-872a-d498672270e2" (UID: "01a1daf8-06fe-4b2b-872a-d498672270e2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:51.212749 kubelet[1803]: I0117 00:45:51.211778 1803 scope.go:117] "RemoveContainer" containerID="b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180" Jan 17 00:45:51.213729 kubelet[1803]: I0117 00:45:51.213668 1803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "01a1daf8-06fe-4b2b-872a-d498672270e2" (UID: "01a1daf8-06fe-4b2b-872a-d498672270e2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:51.215154 kubelet[1803]: I0117 00:45:51.214502 1803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "01a1daf8-06fe-4b2b-872a-d498672270e2" (UID: "01a1daf8-06fe-4b2b-872a-d498672270e2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:51.215154 kubelet[1803]: I0117 00:45:51.214738 1803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-cni-path" (OuterVolumeSpecName: "cni-path") pod "01a1daf8-06fe-4b2b-872a-d498672270e2" (UID: "01a1daf8-06fe-4b2b-872a-d498672270e2"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:51.218023 kubelet[1803]: I0117 00:45:51.215578 1803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "01a1daf8-06fe-4b2b-872a-d498672270e2" (UID: "01a1daf8-06fe-4b2b-872a-d498672270e2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:51.218023 kubelet[1803]: I0117 00:45:51.216208 1803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "01a1daf8-06fe-4b2b-872a-d498672270e2" (UID: "01a1daf8-06fe-4b2b-872a-d498672270e2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:51.218023 kubelet[1803]: I0117 00:45:51.216301 1803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "01a1daf8-06fe-4b2b-872a-d498672270e2" (UID: "01a1daf8-06fe-4b2b-872a-d498672270e2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:51.218023 kubelet[1803]: I0117 00:45:51.216335 1803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "01a1daf8-06fe-4b2b-872a-d498672270e2" (UID: "01a1daf8-06fe-4b2b-872a-d498672270e2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:51.218023 kubelet[1803]: I0117 00:45:51.216365 1803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-hostproc" (OuterVolumeSpecName: "hostproc") pod "01a1daf8-06fe-4b2b-872a-d498672270e2" (UID: "01a1daf8-06fe-4b2b-872a-d498672270e2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:51.223643 containerd[1467]: time="2026-01-17T00:45:51.223013022Z" level=info msg="RemoveContainer for \"b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180\"" Jan 17 00:45:51.225395 kubelet[1803]: I0117 00:45:51.224390 1803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "01a1daf8-06fe-4b2b-872a-d498672270e2" (UID: "01a1daf8-06fe-4b2b-872a-d498672270e2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:51.233943 kubelet[1803]: I0117 00:45:51.233004 1803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a1daf8-06fe-4b2b-872a-d498672270e2-kube-api-access-n7lh6" (OuterVolumeSpecName: "kube-api-access-n7lh6") pod "01a1daf8-06fe-4b2b-872a-d498672270e2" (UID: "01a1daf8-06fe-4b2b-872a-d498672270e2"). InnerVolumeSpecName "kube-api-access-n7lh6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:45:51.233943 kubelet[1803]: I0117 00:45:51.233793 1803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01a1daf8-06fe-4b2b-872a-d498672270e2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "01a1daf8-06fe-4b2b-872a-d498672270e2" (UID: "01a1daf8-06fe-4b2b-872a-d498672270e2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:45:51.236239 kubelet[1803]: I0117 00:45:51.236210 1803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01a1daf8-06fe-4b2b-872a-d498672270e2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "01a1daf8-06fe-4b2b-872a-d498672270e2" (UID: "01a1daf8-06fe-4b2b-872a-d498672270e2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:45:51.240714 containerd[1467]: time="2026-01-17T00:45:51.240571248Z" level=info msg="RemoveContainer for \"b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180\" returns successfully" Jan 17 00:45:51.241369 kubelet[1803]: I0117 00:45:51.241239 1803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a1daf8-06fe-4b2b-872a-d498672270e2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "01a1daf8-06fe-4b2b-872a-d498672270e2" (UID: "01a1daf8-06fe-4b2b-872a-d498672270e2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:45:51.241749 kubelet[1803]: I0117 00:45:51.241685 1803 scope.go:117] "RemoveContainer" containerID="01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b" Jan 17 00:45:51.244574 containerd[1467]: time="2026-01-17T00:45:51.244442454Z" level=error msg="ContainerStatus for \"01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b\": not found" Jan 17 00:45:51.244868 kubelet[1803]: E0117 00:45:51.244766 1803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b\": not found" containerID="01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b" Jan 17 00:45:51.245452 kubelet[1803]: I0117 00:45:51.244869 1803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b"} err="failed to get container status \"01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b\": rpc error: code = NotFound desc = an error occurred when try to find container \"01a7eaefe9fb1fcdc4707b1b3b21c1e42f6a986a3fbc77362a9e9510c31ce56b\": not found" Jan 17 00:45:51.245452 kubelet[1803]: I0117 00:45:51.245086 1803 scope.go:117] "RemoveContainer" containerID="4369ff3a38b0882f90e570756c78fbd3affd49ce8a40f948395c07d741f16cc4" Jan 17 00:45:51.245652 containerd[1467]: time="2026-01-17T00:45:51.245317208Z" level=error msg="ContainerStatus for \"4369ff3a38b0882f90e570756c78fbd3affd49ce8a40f948395c07d741f16cc4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4369ff3a38b0882f90e570756c78fbd3affd49ce8a40f948395c07d741f16cc4\": not found" Jan 17 00:45:51.246774 
kubelet[1803]: E0117 00:45:51.246553 1803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4369ff3a38b0882f90e570756c78fbd3affd49ce8a40f948395c07d741f16cc4\": not found" containerID="4369ff3a38b0882f90e570756c78fbd3affd49ce8a40f948395c07d741f16cc4" Jan 17 00:45:51.246774 kubelet[1803]: I0117 00:45:51.246587 1803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4369ff3a38b0882f90e570756c78fbd3affd49ce8a40f948395c07d741f16cc4"} err="failed to get container status \"4369ff3a38b0882f90e570756c78fbd3affd49ce8a40f948395c07d741f16cc4\": rpc error: code = NotFound desc = an error occurred when try to find container \"4369ff3a38b0882f90e570756c78fbd3affd49ce8a40f948395c07d741f16cc4\": not found" Jan 17 00:45:51.246774 kubelet[1803]: I0117 00:45:51.246610 1803 scope.go:117] "RemoveContainer" containerID="11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c" Jan 17 00:45:51.247572 containerd[1467]: time="2026-01-17T00:45:51.247010766Z" level=error msg="ContainerStatus for \"11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c\": not found" Jan 17 00:45:51.247640 kubelet[1803]: E0117 00:45:51.247216 1803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c\": not found" containerID="11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c" Jan 17 00:45:51.247640 kubelet[1803]: I0117 00:45:51.247246 1803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c"} err="failed to get container status \"11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c\": rpc error: code = NotFound desc = an error occurred when try to find container \"11b276455707613420b8a7c49e7aacbf263b7db1d6a20b4a480aae14ac43e92c\": not found" Jan 17 00:45:51.247640 kubelet[1803]: I0117 00:45:51.247267 1803 scope.go:117] "RemoveContainer" containerID="ff07a1babcbdd8e9080b2c5affe510b51e7292a2b0d4232a142db2d7e3fc0d50" Jan 17 00:45:51.249445 containerd[1467]: time="2026-01-17T00:45:51.249001502Z" level=error msg="ContainerStatus for \"ff07a1babcbdd8e9080b2c5affe510b51e7292a2b0d4232a142db2d7e3fc0d50\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff07a1babcbdd8e9080b2c5affe510b51e7292a2b0d4232a142db2d7e3fc0d50\": not found" Jan 17 00:45:51.249519 kubelet[1803]: E0117 00:45:51.249130 1803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff07a1babcbdd8e9080b2c5affe510b51e7292a2b0d4232a142db2d7e3fc0d50\": not found" containerID="ff07a1babcbdd8e9080b2c5affe510b51e7292a2b0d4232a142db2d7e3fc0d50" Jan 17 00:45:51.249519 kubelet[1803]: I0117 00:45:51.249156 1803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ff07a1babcbdd8e9080b2c5affe510b51e7292a2b0d4232a142db2d7e3fc0d50"} err="failed to get container status \"ff07a1babcbdd8e9080b2c5affe510b51e7292a2b0d4232a142db2d7e3fc0d50\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"ff07a1babcbdd8e9080b2c5affe510b51e7292a2b0d4232a142db2d7e3fc0d50\": not found" Jan 17 00:45:51.249519 kubelet[1803]: I0117 00:45:51.249175 1803 scope.go:117] "RemoveContainer" containerID="b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180" Jan 17 00:45:51.251694 containerd[1467]: time="2026-01-17T00:45:51.251198662Z" level=error msg="ContainerStatus for \"b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180\": not found" Jan 17 00:45:51.252142 kubelet[1803]: E0117 00:45:51.252028 1803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180\": not found" containerID="b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180" Jan 17 00:45:51.252142 kubelet[1803]: I0117 00:45:51.252091 1803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180"} err="failed to get container status \"b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180\": rpc error: code = NotFound desc = an error occurred when try to find container \"b571d8d54b95b62e526996aad53bc4c3cc646549f462d2f3d2a49eff6d315180\": not found" Jan 17 00:45:51.312559 kubelet[1803]: I0117 00:45:51.312272 1803 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-host-proc-sys-net\") on node \"10.0.0.120\" DevicePath \"\"" Jan 17 00:45:51.312559 kubelet[1803]: I0117 00:45:51.312333 1803 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-bpf-maps\") on node \"10.0.0.120\" DevicePath \"\"" Jan 17 00:45:51.312559 kubelet[1803]: I0117 00:45:51.312344 1803 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-cilium-cgroup\") on node \"10.0.0.120\" DevicePath \"\"" Jan 17 00:45:51.312559 kubelet[1803]: I0117 00:45:51.312353 1803 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-hostproc\") on node \"10.0.0.120\" DevicePath \"\"" Jan 17 00:45:51.312559 kubelet[1803]: I0117 00:45:51.312361 1803 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01a1daf8-06fe-4b2b-872a-d498672270e2-hubble-tls\") on node \"10.0.0.120\" DevicePath \"\"" Jan 17 00:45:51.312559 kubelet[1803]: I0117 00:45:51.312369 1803 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-lib-modules\") on node \"10.0.0.120\" DevicePath \"\"" Jan 17 00:45:51.312559 kubelet[1803]: I0117 00:45:51.312377 1803 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-xtables-lock\") on node \"10.0.0.120\" DevicePath \"\"" Jan 17 00:45:51.312559 kubelet[1803]: I0117 00:45:51.312387 1803 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/01a1daf8-06fe-4b2b-872a-d498672270e2-clustermesh-secrets\") on node \"10.0.0.120\" DevicePath \"\"" Jan 17 00:45:51.313256 kubelet[1803]: I0117 00:45:51.312396 1803 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n7lh6\" (UniqueName: \"kubernetes.io/projected/01a1daf8-06fe-4b2b-872a-d498672270e2-kube-api-access-n7lh6\") on node \"10.0.0.120\" DevicePath \"\"" Jan 17 00:45:51.313256 kubelet[1803]: I0117 00:45:51.312405 1803 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-cilium-run\") on node \"10.0.0.120\" DevicePath \"\"" Jan 17 00:45:51.313256 kubelet[1803]: I0117 00:45:51.312412 1803 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-cni-path\") on node \"10.0.0.120\" DevicePath \"\"" Jan 17 00:45:51.313256 kubelet[1803]: I0117 00:45:51.312420 1803 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-host-proc-sys-kernel\") on node \"10.0.0.120\" DevicePath \"\"" Jan 17 00:45:51.313256 kubelet[1803]: I0117 00:45:51.312427 1803 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01a1daf8-06fe-4b2b-872a-d498672270e2-cilium-config-path\") on node \"10.0.0.120\" DevicePath \"\"" Jan 17 00:45:51.313256 kubelet[1803]: I0117 00:45:51.312437 1803 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01a1daf8-06fe-4b2b-872a-d498672270e2-etc-cni-netd\") on node \"10.0.0.120\" DevicePath \"\"" Jan 17 00:45:51.431384 systemd[1]: var-lib-kubelet-pods-01a1daf8\x2d06fe\x2d4b2b\x2d872a\x2dd498672270e2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn7lh6.mount: Deactivated successfully. Jan 17 00:45:51.431552 systemd[1]: var-lib-kubelet-pods-01a1daf8\x2d06fe\x2d4b2b\x2d872a\x2dd498672270e2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 00:45:51.431663 systemd[1]: var-lib-kubelet-pods-01a1daf8\x2d06fe\x2d4b2b\x2d872a\x2dd498672270e2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 00:45:51.466433 kubelet[1803]: E0117 00:45:51.466277 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:51.487007 systemd[1]: Removed slice kubepods-burstable-pod01a1daf8_06fe_4b2b_872a_d498672270e2.slice - libcontainer container kubepods-burstable-pod01a1daf8_06fe_4b2b_872a_d498672270e2.slice. Jan 17 00:45:51.487163 systemd[1]: kubepods-burstable-pod01a1daf8_06fe_4b2b_872a_d498672270e2.slice: Consumed 23.604s CPU time. 
Jan 17 00:45:51.609309 kubelet[1803]: E0117 00:45:51.607700 1803 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:45:52.476756 kubelet[1803]: E0117 00:45:52.476443 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:52.856373 kubelet[1803]: I0117 00:45:52.855728 1803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01a1daf8-06fe-4b2b-872a-d498672270e2" path="/var/lib/kubelet/pods/01a1daf8-06fe-4b2b-872a-d498672270e2/volumes" Jan 17 00:45:53.479644 kubelet[1803]: E0117 00:45:53.478698 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:53.927201 systemd[1]: Created slice kubepods-burstable-pod1f0dfd5f_9580_425c_82d9_864a16e9f2f0.slice - libcontainer container kubepods-burstable-pod1f0dfd5f_9580_425c_82d9_864a16e9f2f0.slice. Jan 17 00:45:53.941229 kubelet[1803]: I0117 00:45:53.941130 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1f0dfd5f-9580-425c-82d9-864a16e9f2f0-cilium-cgroup\") pod \"cilium-cw4sg\" (UID: \"1f0dfd5f-9580-425c-82d9-864a16e9f2f0\") " pod="kube-system/cilium-cw4sg" Jan 17 00:45:53.941229 kubelet[1803]: I0117 00:45:53.941183 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1f0dfd5f-9580-425c-82d9-864a16e9f2f0-cilium-ipsec-secrets\") pod \"cilium-cw4sg\" (UID: \"1f0dfd5f-9580-425c-82d9-864a16e9f2f0\") " pod="kube-system/cilium-cw4sg" Jan 17 00:45:53.941229 kubelet[1803]: I0117 00:45:53.941218 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1f0dfd5f-9580-425c-82d9-864a16e9f2f0-hubble-tls\") pod \"cilium-cw4sg\" (UID: \"1f0dfd5f-9580-425c-82d9-864a16e9f2f0\") " pod="kube-system/cilium-cw4sg" Jan 17 00:45:53.941229 kubelet[1803]: I0117 00:45:53.941242 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wvmw\" (UniqueName: \"kubernetes.io/projected/1f0dfd5f-9580-425c-82d9-864a16e9f2f0-kube-api-access-6wvmw\") pod \"cilium-cw4sg\" (UID: \"1f0dfd5f-9580-425c-82d9-864a16e9f2f0\") " pod="kube-system/cilium-cw4sg" Jan 17 00:45:53.941229 kubelet[1803]: I0117 00:45:53.941265 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1f0dfd5f-9580-425c-82d9-864a16e9f2f0-cilium-run\") pod \"cilium-cw4sg\" (UID: \"1f0dfd5f-9580-425c-82d9-864a16e9f2f0\") " pod="kube-system/cilium-cw4sg" Jan 17 00:45:53.941229 kubelet[1803]: I0117 00:45:53.941284 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f0dfd5f-9580-425c-82d9-864a16e9f2f0-etc-cni-netd\") pod \"cilium-cw4sg\" (UID: \"1f0dfd5f-9580-425c-82d9-864a16e9f2f0\") " pod="kube-system/cilium-cw4sg" Jan 17 00:45:53.941799 kubelet[1803]: I0117 00:45:53.941305 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/1f0dfd5f-9580-425c-82d9-864a16e9f2f0-host-proc-sys-kernel\") pod \"cilium-cw4sg\" (UID: \"1f0dfd5f-9580-425c-82d9-864a16e9f2f0\") " pod="kube-system/cilium-cw4sg" Jan 17 00:45:53.941799 kubelet[1803]: I0117 00:45:53.941330 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1f0dfd5f-9580-425c-82d9-864a16e9f2f0-bpf-maps\") pod \"cilium-cw4sg\" (UID: \"1f0dfd5f-9580-425c-82d9-864a16e9f2f0\") " pod="kube-system/cilium-cw4sg" Jan 17 00:45:53.941799 kubelet[1803]: I0117 00:45:53.941353 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1f0dfd5f-9580-425c-82d9-864a16e9f2f0-hostproc\") pod \"cilium-cw4sg\" (UID: \"1f0dfd5f-9580-425c-82d9-864a16e9f2f0\") " pod="kube-system/cilium-cw4sg" Jan 17 00:45:53.941799 kubelet[1803]: I0117 00:45:53.941437 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f0dfd5f-9580-425c-82d9-864a16e9f2f0-lib-modules\") pod \"cilium-cw4sg\" (UID: \"1f0dfd5f-9580-425c-82d9-864a16e9f2f0\") " pod="kube-system/cilium-cw4sg" Jan 17 00:45:53.941799 kubelet[1803]: I0117 00:45:53.941456 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f0dfd5f-9580-425c-82d9-864a16e9f2f0-xtables-lock\") pod \"cilium-cw4sg\" (UID: \"1f0dfd5f-9580-425c-82d9-864a16e9f2f0\") " pod="kube-system/cilium-cw4sg" Jan 17 00:45:53.941799 kubelet[1803]: I0117 00:45:53.941478 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1f0dfd5f-9580-425c-82d9-864a16e9f2f0-clustermesh-secrets\") pod \"cilium-cw4sg\" (UID: \"1f0dfd5f-9580-425c-82d9-864a16e9f2f0\") " pod="kube-system/cilium-cw4sg" Jan 17 00:45:53.942095 kubelet[1803]: I0117 00:45:53.941498 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1f0dfd5f-9580-425c-82d9-864a16e9f2f0-cni-path\") pod \"cilium-cw4sg\" (UID: \"1f0dfd5f-9580-425c-82d9-864a16e9f2f0\") " pod="kube-system/cilium-cw4sg" Jan 17 00:45:53.942095 kubelet[1803]: I0117 00:45:53.941519 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f0dfd5f-9580-425c-82d9-864a16e9f2f0-cilium-config-path\") pod \"cilium-cw4sg\" (UID: \"1f0dfd5f-9580-425c-82d9-864a16e9f2f0\") " pod="kube-system/cilium-cw4sg" Jan 17 00:45:53.942095 kubelet[1803]: I0117 00:45:53.941539 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1f0dfd5f-9580-425c-82d9-864a16e9f2f0-host-proc-sys-net\") pod \"cilium-cw4sg\" (UID: \"1f0dfd5f-9580-425c-82d9-864a16e9f2f0\") " pod="kube-system/cilium-cw4sg" Jan 17 00:45:53.997108 systemd[1]: Created slice kubepods-besteffort-pod236d4118_e10b_4c51_ab21_134b1bf215b2.slice - libcontainer container kubepods-besteffort-pod236d4118_e10b_4c51_ab21_134b1bf215b2.slice. 
Jan 17 00:45:54.046994 kubelet[1803]: I0117 00:45:54.046126 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/236d4118-e10b-4c51-ab21-134b1bf215b2-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jmkb7\" (UID: \"236d4118-e10b-4c51-ab21-134b1bf215b2\") " pod="kube-system/cilium-operator-6c4d7847fc-jmkb7" Jan 17 00:45:54.046994 kubelet[1803]: I0117 00:45:54.046177 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zhlk\" (UniqueName: \"kubernetes.io/projected/236d4118-e10b-4c51-ab21-134b1bf215b2-kube-api-access-2zhlk\") pod \"cilium-operator-6c4d7847fc-jmkb7\" (UID: \"236d4118-e10b-4c51-ab21-134b1bf215b2\") " pod="kube-system/cilium-operator-6c4d7847fc-jmkb7" Jan 17 00:45:54.273987 kubelet[1803]: E0117 00:45:54.271942 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:54.274110 containerd[1467]: time="2026-01-17T00:45:54.273103965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cw4sg,Uid:1f0dfd5f-9580-425c-82d9-864a16e9f2f0,Namespace:kube-system,Attempt:0,}" Jan 17 00:45:54.317111 kubelet[1803]: E0117 00:45:54.316408 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:54.317249 containerd[1467]: time="2026-01-17T00:45:54.317137436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jmkb7,Uid:236d4118-e10b-4c51-ab21-134b1bf215b2,Namespace:kube-system,Attempt:0,}" Jan 17 00:45:54.388933 containerd[1467]: time="2026-01-17T00:45:54.387979678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:45:54.388933 containerd[1467]: time="2026-01-17T00:45:54.388614387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:45:54.388933 containerd[1467]: time="2026-01-17T00:45:54.388637570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:45:54.388933 containerd[1467]: time="2026-01-17T00:45:54.388761120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:45:54.405560 containerd[1467]: time="2026-01-17T00:45:54.405080348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:45:54.405560 containerd[1467]: time="2026-01-17T00:45:54.405163101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:45:54.405560 containerd[1467]: time="2026-01-17T00:45:54.405187517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:45:54.405560 containerd[1467]: time="2026-01-17T00:45:54.405321757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:45:54.462002 systemd[1]: Started cri-containerd-ac7e55d9e61d6a9ba7a99d0ced32c6d9b8caae7f7229a2f3c937305b170e05d5.scope - libcontainer container ac7e55d9e61d6a9ba7a99d0ced32c6d9b8caae7f7229a2f3c937305b170e05d5. Jan 17 00:45:54.473865 systemd[1]: Started cri-containerd-ae1d2b29739a48f60032ad579bd277d98f9c6e8aea272d96149492df5128b6c3.scope - libcontainer container ae1d2b29739a48f60032ad579bd277d98f9c6e8aea272d96149492df5128b6c3. Jan 17 00:45:54.484356 kubelet[1803]: E0117 00:45:54.483227 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:54.556618 containerd[1467]: time="2026-01-17T00:45:54.556465585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cw4sg,Uid:1f0dfd5f-9580-425c-82d9-864a16e9f2f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac7e55d9e61d6a9ba7a99d0ced32c6d9b8caae7f7229a2f3c937305b170e05d5\"" Jan 17 00:45:54.563667 kubelet[1803]: E0117 00:45:54.562699 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:54.590467 containerd[1467]: time="2026-01-17T00:45:54.586299962Z" level=info msg="CreateContainer within sandbox \"ac7e55d9e61d6a9ba7a99d0ced32c6d9b8caae7f7229a2f3c937305b170e05d5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:45:54.623412 containerd[1467]: time="2026-01-17T00:45:54.623260359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jmkb7,Uid:236d4118-e10b-4c51-ab21-134b1bf215b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae1d2b29739a48f60032ad579bd277d98f9c6e8aea272d96149492df5128b6c3\"" Jan 17 00:45:54.631550 kubelet[1803]: E0117 00:45:54.629190 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:54.631760 containerd[1467]: time="2026-01-17T00:45:54.631669083Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 00:45:54.649980 containerd[1467]: time="2026-01-17T00:45:54.646711897Z" level=info msg="CreateContainer within sandbox \"ac7e55d9e61d6a9ba7a99d0ced32c6d9b8caae7f7229a2f3c937305b170e05d5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4cacd3a14911560ae45f58f30c6054ac0d41357cb8ea1f63a64e0f87cc2a4b06\"" Jan 17 00:45:54.649980 containerd[1467]: time="2026-01-17T00:45:54.647685165Z" level=info msg="StartContainer for \"4cacd3a14911560ae45f58f30c6054ac0d41357cb8ea1f63a64e0f87cc2a4b06\"" Jan 17 00:45:54.749188 systemd[1]: Started cri-containerd-4cacd3a14911560ae45f58f30c6054ac0d41357cb8ea1f63a64e0f87cc2a4b06.scope - libcontainer container 4cacd3a14911560ae45f58f30c6054ac0d41357cb8ea1f63a64e0f87cc2a4b06. Jan 17 00:45:54.879322 containerd[1467]: time="2026-01-17T00:45:54.878302586Z" level=info msg="StartContainer for \"4cacd3a14911560ae45f58f30c6054ac0d41357cb8ea1f63a64e0f87cc2a4b06\" returns successfully" Jan 17 00:45:54.949025 systemd[1]: cri-containerd-4cacd3a14911560ae45f58f30c6054ac0d41357cb8ea1f63a64e0f87cc2a4b06.scope: Deactivated successfully. 
Jan 17 00:45:55.089109 containerd[1467]: time="2026-01-17T00:45:55.088957433Z" level=info msg="shim disconnected" id=4cacd3a14911560ae45f58f30c6054ac0d41357cb8ea1f63a64e0f87cc2a4b06 namespace=k8s.io Jan 17 00:45:55.089109 containerd[1467]: time="2026-01-17T00:45:55.089051909Z" level=warning msg="cleaning up after shim disconnected" id=4cacd3a14911560ae45f58f30c6054ac0d41357cb8ea1f63a64e0f87cc2a4b06 namespace=k8s.io Jan 17 00:45:55.089109 containerd[1467]: time="2026-01-17T00:45:55.089070944Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:45:55.195752 kubelet[1803]: E0117 00:45:55.195064 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:55.212731 containerd[1467]: time="2026-01-17T00:45:55.212650188Z" level=info msg="CreateContainer within sandbox \"ac7e55d9e61d6a9ba7a99d0ced32c6d9b8caae7f7229a2f3c937305b170e05d5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:45:55.254789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3704668193.mount: Deactivated successfully. Jan 17 00:45:55.274733 containerd[1467]: time="2026-01-17T00:45:55.274455581Z" level=info msg="CreateContainer within sandbox \"ac7e55d9e61d6a9ba7a99d0ced32c6d9b8caae7f7229a2f3c937305b170e05d5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d26fa0845913e1ab81678c6a5a9bf86037bb9c0f3e289a55b0dfc126bd0b6e62\"" Jan 17 00:45:55.280527 containerd[1467]: time="2026-01-17T00:45:55.277542260Z" level=info msg="StartContainer for \"d26fa0845913e1ab81678c6a5a9bf86037bb9c0f3e289a55b0dfc126bd0b6e62\"" Jan 17 00:45:55.378057 systemd[1]: Started cri-containerd-d26fa0845913e1ab81678c6a5a9bf86037bb9c0f3e289a55b0dfc126bd0b6e62.scope - libcontainer container d26fa0845913e1ab81678c6a5a9bf86037bb9c0f3e289a55b0dfc126bd0b6e62. Jan 17 00:45:55.463419 containerd[1467]: time="2026-01-17T00:45:55.462958201Z" level=info msg="StartContainer for \"d26fa0845913e1ab81678c6a5a9bf86037bb9c0f3e289a55b0dfc126bd0b6e62\" returns successfully" Jan 17 00:45:55.484396 kubelet[1803]: E0117 00:45:55.484048 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:55.501361 systemd[1]: cri-containerd-d26fa0845913e1ab81678c6a5a9bf86037bb9c0f3e289a55b0dfc126bd0b6e62.scope: Deactivated successfully. Jan 17 00:45:55.596013 containerd[1467]: time="2026-01-17T00:45:55.591667845Z" level=info msg="shim disconnected" id=d26fa0845913e1ab81678c6a5a9bf86037bb9c0f3e289a55b0dfc126bd0b6e62 namespace=k8s.io Jan 17 00:45:55.596013 containerd[1467]: time="2026-01-17T00:45:55.591728889Z" level=warning msg="cleaning up after shim disconnected" id=d26fa0845913e1ab81678c6a5a9bf86037bb9c0f3e289a55b0dfc126bd0b6e62 namespace=k8s.io Jan 17 00:45:55.596013 containerd[1467]: time="2026-01-17T00:45:55.591743166Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:45:56.071737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d26fa0845913e1ab81678c6a5a9bf86037bb9c0f3e289a55b0dfc126bd0b6e62-rootfs.mount: Deactivated successfully. 
Jan 17 00:45:56.266217 kubelet[1803]: E0117 00:45:56.251366 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:56.342091 containerd[1467]: time="2026-01-17T00:45:56.341483786Z" level=info msg="CreateContainer within sandbox \"ac7e55d9e61d6a9ba7a99d0ced32c6d9b8caae7f7229a2f3c937305b170e05d5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:45:56.380888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4028973181.mount: Deactivated successfully. Jan 17 00:45:56.491892 kubelet[1803]: E0117 00:45:56.486425 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:56.610964 kubelet[1803]: E0117 00:45:56.610904 1803 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:45:56.653313 containerd[1467]: time="2026-01-17T00:45:56.653196272Z" level=info msg="CreateContainer within sandbox \"ac7e55d9e61d6a9ba7a99d0ced32c6d9b8caae7f7229a2f3c937305b170e05d5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8052d62f8ff441a8917413045321901bd78b17d730d215ce42a5a9ad20ddb7e3\"" Jan 17 00:45:56.657052 containerd[1467]: time="2026-01-17T00:45:56.654056901Z" level=info msg="StartContainer for \"8052d62f8ff441a8917413045321901bd78b17d730d215ce42a5a9ad20ddb7e3\"" Jan 17 00:45:56.893539 systemd[1]: Started cri-containerd-8052d62f8ff441a8917413045321901bd78b17d730d215ce42a5a9ad20ddb7e3.scope - libcontainer container 8052d62f8ff441a8917413045321901bd78b17d730d215ce42a5a9ad20ddb7e3. Jan 17 00:45:57.151991 systemd[1]: cri-containerd-8052d62f8ff441a8917413045321901bd78b17d730d215ce42a5a9ad20ddb7e3.scope: Deactivated successfully. Jan 17 00:45:57.165347 containerd[1467]: time="2026-01-17T00:45:57.162615074Z" level=info msg="StartContainer for \"8052d62f8ff441a8917413045321901bd78b17d730d215ce42a5a9ad20ddb7e3\" returns successfully" Jan 17 00:45:57.255966 kubelet[1803]: E0117 00:45:57.255261 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:57.262770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8052d62f8ff441a8917413045321901bd78b17d730d215ce42a5a9ad20ddb7e3-rootfs.mount: Deactivated successfully. 
Jan 17 00:45:57.386638 containerd[1467]: time="2026-01-17T00:45:57.385345952Z" level=info msg="shim disconnected" id=8052d62f8ff441a8917413045321901bd78b17d730d215ce42a5a9ad20ddb7e3 namespace=k8s.io Jan 17 00:45:57.386638 containerd[1467]: time="2026-01-17T00:45:57.385413527Z" level=warning msg="cleaning up after shim disconnected" id=8052d62f8ff441a8917413045321901bd78b17d730d215ce42a5a9ad20ddb7e3 namespace=k8s.io Jan 17 00:45:57.386638 containerd[1467]: time="2026-01-17T00:45:57.385426111Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:45:57.511554 kubelet[1803]: E0117 00:45:57.510992 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:58.305918 kubelet[1803]: E0117 00:45:58.304709 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:58.326091 containerd[1467]: time="2026-01-17T00:45:58.324173846Z" level=info msg="CreateContainer within sandbox \"ac7e55d9e61d6a9ba7a99d0ced32c6d9b8caae7f7229a2f3c937305b170e05d5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:45:58.410779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount782319334.mount: Deactivated successfully. Jan 17 00:45:58.511118 containerd[1467]: time="2026-01-17T00:45:58.510382294Z" level=info msg="CreateContainer within sandbox \"ac7e55d9e61d6a9ba7a99d0ced32c6d9b8caae7f7229a2f3c937305b170e05d5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3d60c118f6c05eb87aad84ff50f2999788dbaa814cbc0591ed4d1f273dcbf446\"" Jan 17 00:45:58.519304 kubelet[1803]: E0117 00:45:58.516956 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:58.519779 containerd[1467]: time="2026-01-17T00:45:58.517506168Z" level=info msg="StartContainer for \"3d60c118f6c05eb87aad84ff50f2999788dbaa814cbc0591ed4d1f273dcbf446\"" Jan 17 00:45:58.690112 kubelet[1803]: I0117 00:45:58.688702 1803 setters.go:618] "Node became not ready" node="10.0.0.120" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-17T00:45:58Z","lastTransitionTime":"2026-01-17T00:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 17 00:45:58.738284 systemd[1]: Started cri-containerd-3d60c118f6c05eb87aad84ff50f2999788dbaa814cbc0591ed4d1f273dcbf446.scope - libcontainer container 3d60c118f6c05eb87aad84ff50f2999788dbaa814cbc0591ed4d1f273dcbf446. Jan 17 00:45:58.904009 systemd[1]: cri-containerd-3d60c118f6c05eb87aad84ff50f2999788dbaa814cbc0591ed4d1f273dcbf446.scope: Deactivated successfully. 
Jan 17 00:45:58.939130 containerd[1467]: time="2026-01-17T00:45:58.938335542Z" level=info msg="StartContainer for \"3d60c118f6c05eb87aad84ff50f2999788dbaa814cbc0591ed4d1f273dcbf446\" returns successfully" Jan 17 00:45:59.140001 containerd[1467]: time="2026-01-17T00:45:59.137269510Z" level=info msg="shim disconnected" id=3d60c118f6c05eb87aad84ff50f2999788dbaa814cbc0591ed4d1f273dcbf446 namespace=k8s.io Jan 17 00:45:59.140001 containerd[1467]: time="2026-01-17T00:45:59.137458251Z" level=warning msg="cleaning up after shim disconnected" id=3d60c118f6c05eb87aad84ff50f2999788dbaa814cbc0591ed4d1f273dcbf446 namespace=k8s.io Jan 17 00:45:59.140001 containerd[1467]: time="2026-01-17T00:45:59.137479691Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:45:59.327972 kubelet[1803]: E0117 00:45:59.324798 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:59.401730 containerd[1467]: time="2026-01-17T00:45:59.401439664Z" level=info msg="CreateContainer within sandbox \"ac7e55d9e61d6a9ba7a99d0ced32c6d9b8caae7f7229a2f3c937305b170e05d5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 00:45:59.402341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d60c118f6c05eb87aad84ff50f2999788dbaa814cbc0591ed4d1f273dcbf446-rootfs.mount: Deactivated successfully. Jan 17 00:45:59.522970 kubelet[1803]: E0117 00:45:59.518628 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:45:59.522415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1165236899.mount: Deactivated successfully. Jan 17 00:45:59.625148 containerd[1467]: time="2026-01-17T00:45:59.624925979Z" level=info msg="CreateContainer within sandbox \"ac7e55d9e61d6a9ba7a99d0ced32c6d9b8caae7f7229a2f3c937305b170e05d5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"977c4a624e97e22844b96f5c713bcd23b54a70ff6c7bbe5e03363189aa8e0c39\"" Jan 17 00:45:59.628346 containerd[1467]: time="2026-01-17T00:45:59.626907343Z" level=info msg="StartContainer for \"977c4a624e97e22844b96f5c713bcd23b54a70ff6c7bbe5e03363189aa8e0c39\"" Jan 17 00:45:59.794932 systemd[1]: Started cri-containerd-977c4a624e97e22844b96f5c713bcd23b54a70ff6c7bbe5e03363189aa8e0c39.scope - libcontainer container 977c4a624e97e22844b96f5c713bcd23b54a70ff6c7bbe5e03363189aa8e0c39. 
Jan 17 00:45:59.979012 containerd[1467]: time="2026-01-17T00:45:59.976149536Z" level=info msg="StartContainer for \"977c4a624e97e22844b96f5c713bcd23b54a70ff6c7bbe5e03363189aa8e0c39\" returns successfully" Jan 17 00:46:00.100024 containerd[1467]: time="2026-01-17T00:46:00.099530768Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:46:00.111146 containerd[1467]: time="2026-01-17T00:46:00.107190103Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 17 00:46:00.117559 containerd[1467]: time="2026-01-17T00:46:00.117350185Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:46:00.126003 containerd[1467]: time="2026-01-17T00:46:00.124606930Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.492886782s" Jan 17 00:46:00.126003 containerd[1467]: time="2026-01-17T00:46:00.124666720Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 17 00:46:00.170339 containerd[1467]: time="2026-01-17T00:46:00.169726506Z" level=info msg="CreateContainer within sandbox \"ae1d2b29739a48f60032ad579bd277d98f9c6e8aea272d96149492df5128b6c3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 00:46:00.234663 containerd[1467]: time="2026-01-17T00:46:00.233108250Z" level=info msg="CreateContainer within sandbox \"ae1d2b29739a48f60032ad579bd277d98f9c6e8aea272d96149492df5128b6c3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d6572230f4a84e3dd3c74087056e8379543c2ca8f2f29b1d9f49b3ad4789a66d\"" Jan 17 00:46:00.259086 containerd[1467]: time="2026-01-17T00:46:00.251094729Z" level=info msg="StartContainer for \"d6572230f4a84e3dd3c74087056e8379543c2ca8f2f29b1d9f49b3ad4789a66d\"" Jan 17 00:46:00.510767 systemd[1]: Started cri-containerd-d6572230f4a84e3dd3c74087056e8379543c2ca8f2f29b1d9f49b3ad4789a66d.scope - libcontainer container d6572230f4a84e3dd3c74087056e8379543c2ca8f2f29b1d9f49b3ad4789a66d. 
Jan 17 00:46:00.531111 kubelet[1803]: E0117 00:46:00.529953 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:46:00.654998 containerd[1467]: time="2026-01-17T00:46:00.653655629Z" level=info msg="StartContainer for \"d6572230f4a84e3dd3c74087056e8379543c2ca8f2f29b1d9f49b3ad4789a66d\" returns successfully" Jan 17 00:46:01.414279 kubelet[1803]: E0117 00:46:01.412798 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:46:01.414279 kubelet[1803]: E0117 00:46:01.413731 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:46:01.532986 kubelet[1803]: E0117 00:46:01.530967 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:46:01.809244 kubelet[1803]: I0117 00:46:01.806239 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jmkb7" podStartSLOduration=3.3067834720000002 podStartE2EDuration="8.80621224s" podCreationTimestamp="2026-01-17 00:45:53 +0000 UTC" firstStartedPulling="2026-01-17 00:45:54.630542587 +0000 UTC m=+207.929847473" lastFinishedPulling="2026-01-17 00:46:00.129971355 +0000 UTC m=+213.429276241" observedRunningTime="2026-01-17 00:46:01.600906861 +0000 UTC m=+214.900211747" watchObservedRunningTime="2026-01-17 00:46:01.80621224 +0000 UTC m=+215.105517146" Jan 17 00:46:01.855954 kubelet[1803]: E0117 00:46:01.854997 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:46:02.062538 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 17 00:46:02.438359 kubelet[1803]: E0117 00:46:02.435785 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:46:02.444194 kubelet[1803]: E0117 00:46:02.440442 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:46:02.543300 kubelet[1803]: E0117 00:46:02.543201 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:46:03.545588 kubelet[1803]: E0117 00:46:03.544614 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:46:04.550534 kubelet[1803]: E0117 00:46:04.546935 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:46:05.550086 kubelet[1803]: E0117 00:46:05.550006 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:46:06.558064 kubelet[1803]: E0117 00:46:06.557603 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:46:07.579182 kubelet[1803]: E0117 00:46:07.579115 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 17 00:46:08.141593 kubelet[1803]: E0117 00:46:08.141381 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:46:08.580731 kubelet[1803]: E0117 00:46:08.579559 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:46:09.581011 kubelet[1803]: E0117 00:46:09.580736 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:46:10.291779 kubelet[1803]: E0117 00:46:10.291653 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:46:10.388580 systemd-networkd[1390]: lxc_health: Link UP Jan 17 00:46:10.501900 kubelet[1803]: E0117 00:46:10.501161 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:46:10.519087 systemd-networkd[1390]: lxc_health: Gained carrier Jan 17 00:46:10.543169 kubelet[1803]: I0117 00:46:10.540298 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cw4sg" podStartSLOduration=17.540270208 podStartE2EDuration="17.540270208s" podCreationTimestamp="2026-01-17 00:45:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:46:01.818409615 +0000 UTC m=+215.117714522" watchObservedRunningTime="2026-01-17 00:46:10.540270208 +0000 UTC m=+223.839575093" Jan 17 00:46:10.590890 kubelet[1803]: E0117 00:46:10.587409 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:46:11.508558 kubelet[1803]: E0117 00:46:11.507999 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:46:11.609466 kubelet[1803]: E0117 00:46:11.609117 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:46:11.741915 systemd-networkd[1390]: lxc_health: Gained IPv6LL Jan 17 00:46:12.616093 kubelet[1803]: E0117 00:46:12.612681 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:46:13.613329 kubelet[1803]: E0117 00:46:13.613248 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:46:14.620606 kubelet[1803]: E0117 00:46:14.620485 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:46:15.638553 kubelet[1803]: E0117 00:46:15.630032 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:46:16.642175 kubelet[1803]: E0117 00:46:16.642122 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:46:17.653262 kubelet[1803]: E0117 00:46:17.653076 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:46:18.654533 kubelet[1803]: E0117 
00:46:18.654297 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"