Jan 24 00:59:32.193345 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026
Jan 24 00:59:32.193366 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:59:32.193378 kernel: BIOS-provided physical RAM map:
Jan 24 00:59:32.193383 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 24 00:59:32.193389 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 24 00:59:32.193394 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 24 00:59:32.193400 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 24 00:59:32.193406 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 24 00:59:32.193411 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 24 00:59:32.193418 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 24 00:59:32.193424 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 24 00:59:32.193429 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 24 00:59:32.193435 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 24 00:59:32.193440 kernel: NX (Execute Disable) protection: active
Jan 24 00:59:32.193447 kernel: APIC: Static calls initialized
Jan 24 00:59:32.193455 kernel: SMBIOS 2.8 present.
Jan 24 00:59:32.193461 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 24 00:59:32.193466 kernel: Hypervisor detected: KVM
Jan 24 00:59:32.193472 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 24 00:59:32.193478 kernel: kvm-clock: using sched offset of 6046182954 cycles
Jan 24 00:59:32.193484 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 24 00:59:32.193490 kernel: tsc: Detected 2445.426 MHz processor
Jan 24 00:59:32.193496 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 00:59:32.193502 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 00:59:32.193508 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 24 00:59:32.193516 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 24 00:59:32.193522 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 00:59:32.193528 kernel: Using GB pages for direct mapping
Jan 24 00:59:32.193534 kernel: ACPI: Early table checksum verification disabled
Jan 24 00:59:32.193539 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 24 00:59:32.193545 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:59:32.193551 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:59:32.193557 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:59:32.193565 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 24 00:59:32.193571 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:59:32.193577 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:59:32.193583 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:59:32.193589 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:59:32.193594 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 24 00:59:32.193600 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 24 00:59:32.193610 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 24 00:59:32.193618 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 24 00:59:32.193624 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 24 00:59:32.193630 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 24 00:59:32.193637 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 24 00:59:32.193642 kernel: No NUMA configuration found
Jan 24 00:59:32.193649 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 24 00:59:32.193655 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 24 00:59:32.193663 kernel: Zone ranges:
Jan 24 00:59:32.193669 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 00:59:32.193675 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 24 00:59:32.193682 kernel: Normal empty
Jan 24 00:59:32.193688 kernel: Movable zone start for each node
Jan 24 00:59:32.193694 kernel: Early memory node ranges
Jan 24 00:59:32.193700 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 24 00:59:32.193706 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 24 00:59:32.193712 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 24 00:59:32.193720 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 00:59:32.193726 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 24 00:59:32.193732 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 24 00:59:32.193738 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 24 00:59:32.193744 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 24 00:59:32.193750 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 24 00:59:32.193757 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 24 00:59:32.193763 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 24 00:59:32.193769 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 00:59:32.193777 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 24 00:59:32.193783 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 24 00:59:32.193789 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 00:59:32.193795 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 24 00:59:32.193801 kernel: TSC deadline timer available
Jan 24 00:59:32.193808 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 24 00:59:32.193814 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 24 00:59:32.193820 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 24 00:59:32.193826 kernel: kvm-guest: setup PV sched yield
Jan 24 00:59:32.193832 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 24 00:59:32.193840 kernel: Booting paravirtualized kernel on KVM
Jan 24 00:59:32.193847 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 00:59:32.193853 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 24 00:59:32.193859 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 24 00:59:32.193866 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 24 00:59:32.193872 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 24 00:59:32.193877 kernel: kvm-guest: PV spinlocks enabled
Jan 24 00:59:32.193884 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 24 00:59:32.193891 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:59:32.193899 kernel: random: crng init done
Jan 24 00:59:32.193905 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 24 00:59:32.193912 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 24 00:59:32.193923 kernel: Fallback order for Node 0: 0
Jan 24 00:59:32.193934 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 24 00:59:32.193946 kernel: Policy zone: DMA32
Jan 24 00:59:32.193955 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 00:59:32.193964 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 136884K reserved, 0K cma-reserved)
Jan 24 00:59:32.193977 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 24 00:59:32.193989 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 24 00:59:32.194001 kernel: ftrace: allocated 149 pages with 4 groups
Jan 24 00:59:32.194010 kernel: Dynamic Preempt: voluntary
Jan 24 00:59:32.194018 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 00:59:32.194030 kernel: rcu: RCU event tracing is enabled.
Jan 24 00:59:32.194041 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 24 00:59:32.194053 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 00:59:32.194059 kernel: Rude variant of Tasks RCU enabled.
Jan 24 00:59:32.194069 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 00:59:32.194075 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 00:59:32.194081 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 24 00:59:32.194087 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 24 00:59:32.194093 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 00:59:32.194099 kernel: Console: colour VGA+ 80x25
Jan 24 00:59:32.194106 kernel: printk: console [ttyS0] enabled
Jan 24 00:59:32.194112 kernel: ACPI: Core revision 20230628
Jan 24 00:59:32.194118 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 24 00:59:32.194181 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 00:59:32.194188 kernel: x2apic enabled
Jan 24 00:59:32.194194 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 24 00:59:32.194201 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 24 00:59:32.194207 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 24 00:59:32.194213 kernel: kvm-guest: setup PV IPIs
Jan 24 00:59:32.194219 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 24 00:59:32.194299 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 24 00:59:32.194306 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 24 00:59:32.194313 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 24 00:59:32.194319 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 24 00:59:32.194326 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 24 00:59:32.194334 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 00:59:32.194341 kernel: Spectre V2 : Mitigation: Retpolines
Jan 24 00:59:32.194347 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 24 00:59:32.194354 kernel: Speculative Store Bypass: Vulnerable
Jan 24 00:59:32.194360 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 24 00:59:32.194370 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 24 00:59:32.194376 kernel: active return thunk: srso_alias_return_thunk
Jan 24 00:59:32.194383 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 24 00:59:32.194389 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 24 00:59:32.194395 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:59:32.194402 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 00:59:32.194408 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 00:59:32.194415 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 00:59:32.194424 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 00:59:32.194430 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 24 00:59:32.194437 kernel: Freeing SMP alternatives memory: 32K
Jan 24 00:59:32.194443 kernel: pid_max: default: 32768 minimum: 301
Jan 24 00:59:32.194450 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 24 00:59:32.194456 kernel: landlock: Up and running.
Jan 24 00:59:32.194462 kernel: SELinux: Initializing.
Jan 24 00:59:32.194469 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:59:32.194475 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:59:32.194484 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 24 00:59:32.194491 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:59:32.194497 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:59:32.194504 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:59:32.194510 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 24 00:59:32.194516 kernel: signal: max sigframe size: 1776
Jan 24 00:59:32.194523 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 00:59:32.194529 kernel: rcu: Max phase no-delay instances is 400.
Jan 24 00:59:32.194536 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 24 00:59:32.194544 kernel: smp: Bringing up secondary CPUs ...
Jan 24 00:59:32.194551 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 00:59:32.194557 kernel: .... node #0, CPUs: #1 #2 #3
Jan 24 00:59:32.194563 kernel: smp: Brought up 1 node, 4 CPUs
Jan 24 00:59:32.194570 kernel: smpboot: Max logical packages: 1
Jan 24 00:59:32.194576 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 24 00:59:32.194582 kernel: devtmpfs: initialized
Jan 24 00:59:32.194589 kernel: x86/mm: Memory block size: 128MB
Jan 24 00:59:32.194595 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 00:59:32.194604 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 24 00:59:32.194611 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 00:59:32.194617 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 00:59:32.194623 kernel: audit: initializing netlink subsys (disabled)
Jan 24 00:59:32.194630 kernel: audit: type=2000 audit(1769216369.795:1): state=initialized audit_enabled=0 res=1
Jan 24 00:59:32.194636 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 00:59:32.194643 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 00:59:32.194649 kernel: cpuidle: using governor menu
Jan 24 00:59:32.194655 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 00:59:32.194664 kernel: dca service started, version 1.12.1
Jan 24 00:59:32.194671 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 24 00:59:32.194677 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 24 00:59:32.194683 kernel: PCI: Using configuration type 1 for base access
Jan 24 00:59:32.194690 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 00:59:32.194696 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 00:59:32.194703 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 00:59:32.194709 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 00:59:32.194715 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 00:59:32.194724 kernel: ACPI: Added _OSI(Module Device)
Jan 24 00:59:32.194730 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 00:59:32.194737 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 00:59:32.194743 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 24 00:59:32.194749 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 24 00:59:32.194756 kernel: ACPI: Interpreter enabled
Jan 24 00:59:32.194762 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 24 00:59:32.194769 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 00:59:32.194775 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 00:59:32.194784 kernel: PCI: Using E820 reservations for host bridge windows
Jan 24 00:59:32.194790 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 24 00:59:32.194797 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 24 00:59:32.194994 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 24 00:59:32.195217 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 24 00:59:32.195416 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 24 00:59:32.195426 kernel: PCI host bridge to bus 0000:00
Jan 24 00:59:32.195559 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 24 00:59:32.195671 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 24 00:59:32.195780 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 24 00:59:32.195889 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 24 00:59:32.196039 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 24 00:59:32.196309 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 24 00:59:32.196425 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 24 00:59:32.196571 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 24 00:59:32.196701 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 24 00:59:32.196844 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 24 00:59:32.197030 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 24 00:59:32.197306 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 24 00:59:32.197436 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 24 00:59:32.197617 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 24 00:59:32.197741 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 24 00:59:32.197859 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 24 00:59:32.198016 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 24 00:59:32.198333 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 24 00:59:32.198467 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 24 00:59:32.198586 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 24 00:59:32.198711 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 24 00:59:32.198838 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 24 00:59:32.198982 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 24 00:59:32.199196 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 24 00:59:32.199395 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 24 00:59:32.199518 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 24 00:59:32.199644 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 24 00:59:32.199769 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 24 00:59:32.199895 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 24 00:59:32.200067 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 24 00:59:32.200357 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 24 00:59:32.200496 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 24 00:59:32.200616 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 24 00:59:32.200631 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 24 00:59:32.200638 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 24 00:59:32.200644 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 24 00:59:32.200651 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 24 00:59:32.200657 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 24 00:59:32.200663 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 24 00:59:32.200670 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 24 00:59:32.200676 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 24 00:59:32.200683 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 24 00:59:32.200692 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 24 00:59:32.200699 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 24 00:59:32.200705 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 24 00:59:32.200711 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 24 00:59:32.200718 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 24 00:59:32.200724 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 24 00:59:32.200731 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 24 00:59:32.200737 kernel: iommu: Default domain type: Translated
Jan 24 00:59:32.200744 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 00:59:32.200753 kernel: PCI: Using ACPI for IRQ routing
Jan 24 00:59:32.200759 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 24 00:59:32.200766 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 24 00:59:32.200773 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 24 00:59:32.200892 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 24 00:59:32.201063 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 24 00:59:32.201317 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 24 00:59:32.201329 kernel: vgaarb: loaded
Jan 24 00:59:32.201336 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 24 00:59:32.201348 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 24 00:59:32.201354 kernel: clocksource: Switched to clocksource kvm-clock
Jan 24 00:59:32.201361 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 00:59:32.201367 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 00:59:32.201374 kernel: pnp: PnP ACPI init
Jan 24 00:59:32.201510 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 24 00:59:32.201521 kernel: pnp: PnP ACPI: found 6 devices
Jan 24 00:59:32.201528 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 00:59:32.201575 kernel: NET: Registered PF_INET protocol family
Jan 24 00:59:32.201584 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 24 00:59:32.201590 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 24 00:59:32.201597 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 00:59:32.201604 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 24 00:59:32.201610 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 24 00:59:32.201644 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 24 00:59:32.201651 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:59:32.201657 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:59:32.201667 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 00:59:32.201673 kernel: NET: Registered PF_XDP protocol family
Jan 24 00:59:32.201966 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 24 00:59:32.202199 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 24 00:59:32.202487 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 24 00:59:32.202602 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 24 00:59:32.202710 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 24 00:59:32.202818 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 24 00:59:32.202832 kernel: PCI: CLS 0 bytes, default 64
Jan 24 00:59:32.202838 kernel: Initialise system trusted keyrings
Jan 24 00:59:32.202845 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 24 00:59:32.202851 kernel: Key type asymmetric registered
Jan 24 00:59:32.202858 kernel: Asymmetric key parser 'x509' registered
Jan 24 00:59:32.202864 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 24 00:59:32.202871 kernel: io scheduler mq-deadline registered
Jan 24 00:59:32.202877 kernel: io scheduler kyber registered
Jan 24 00:59:32.202883 kernel: io scheduler bfq registered
Jan 24 00:59:32.202892 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 24 00:59:32.202899 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 24 00:59:32.202906 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 24 00:59:32.202912 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 24 00:59:32.202925 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 24 00:59:32.202939 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 24 00:59:32.202951 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 24 00:59:32.202961 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 24 00:59:32.202970 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 24 00:59:32.203206 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 24 00:59:32.203220 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 24 00:59:32.203407 kernel: rtc_cmos 00:04: registered as rtc0
Jan 24 00:59:32.203522 kernel: rtc_cmos 00:04: setting system clock to 2026-01-24T00:59:31 UTC (1769216371)
Jan 24 00:59:32.203635 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 24 00:59:32.203644 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 24 00:59:32.203651 kernel: NET: Registered PF_INET6 protocol family
Jan 24 00:59:32.203658 kernel: Segment Routing with IPv6
Jan 24 00:59:32.203668 kernel: In-situ OAM (IOAM) with IPv6
Jan 24 00:59:32.203675 kernel: NET: Registered PF_PACKET protocol family
Jan 24 00:59:32.203682 kernel: Key type dns_resolver registered
Jan 24 00:59:32.203688 kernel: IPI shorthand broadcast: enabled
Jan 24 00:59:32.203695 kernel: sched_clock: Marking stable (1483027451, 506881549)->(2531881613, -541972613)
Jan 24 00:59:32.203702 kernel: registered taskstats version 1
Jan 24 00:59:32.203708 kernel: Loading compiled-in X.509 certificates
Jan 24 00:59:32.203715 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634'
Jan 24 00:59:32.203721 kernel: Key type .fscrypt registered
Jan 24 00:59:32.203730 kernel: Key type fscrypt-provisioning registered
Jan 24 00:59:32.203737 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 24 00:59:32.203744 kernel: ima: Allocated hash algorithm: sha1
Jan 24 00:59:32.203750 kernel: ima: No architecture policies found
Jan 24 00:59:32.203756 kernel: clk: Disabling unused clocks
Jan 24 00:59:32.203763 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 24 00:59:32.203769 kernel: Write protecting the kernel read-only data: 36864k
Jan 24 00:59:32.203776 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 24 00:59:32.203782 kernel: Run /init as init process
Jan 24 00:59:32.203791 kernel: with arguments:
Jan 24 00:59:32.203797 kernel: /init
Jan 24 00:59:32.203804 kernel: with environment:
Jan 24 00:59:32.203810 kernel: HOME=/
Jan 24 00:59:32.203816 kernel: TERM=linux
Jan 24 00:59:32.203825 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:59:32.203833 systemd[1]: Detected virtualization kvm.
Jan 24 00:59:32.203840 systemd[1]: Detected architecture x86-64.
Jan 24 00:59:32.203849 systemd[1]: Running in initrd.
Jan 24 00:59:32.203856 systemd[1]: No hostname configured, using default hostname.
Jan 24 00:59:32.203862 systemd[1]: Hostname set to <localhost>.
Jan 24 00:59:32.203870 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 00:59:32.203877 systemd[1]: Queued start job for default target initrd.target.
Jan 24 00:59:32.203883 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:59:32.203891 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:59:32.203899 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 24 00:59:32.203908 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:59:32.203915 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 24 00:59:32.203923 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 24 00:59:32.203931 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 24 00:59:32.203938 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 24 00:59:32.203945 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:59:32.203954 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:59:32.203961 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:59:32.203968 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:59:32.203975 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:59:32.203993 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:59:32.204002 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:59:32.204010 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:59:32.204019 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 24 00:59:32.204026 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 24 00:59:32.204036 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:59:32.204043 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:59:32.204050 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:59:32.204057 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:59:32.204064 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 24 00:59:32.204072 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:59:32.204081 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 24 00:59:32.204088 systemd[1]: Starting systemd-fsck-usr.service...
Jan 24 00:59:32.204095 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:59:32.204102 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:59:32.204109 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:59:32.204188 systemd-journald[195]: Collecting audit messages is disabled.
Jan 24 00:59:32.204212 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 24 00:59:32.204219 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:59:32.204286 systemd-journald[195]: Journal started
Jan 24 00:59:32.204306 systemd-journald[195]: Runtime Journal (/run/log/journal/90b3d1869c734d998dc9f2908afb2566) is 6.0M, max 48.4M, 42.3M free.
Jan 24 00:59:32.214372 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:59:32.219407 systemd[1]: Finished systemd-fsck-usr.service.
Jan 24 00:59:32.219475 systemd-modules-load[196]: Inserted module 'overlay'
Jan 24 00:59:32.443952 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 24 00:59:32.443997 kernel: Bridge firewalling registered
Jan 24 00:59:32.234566 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 24 00:59:32.255659 systemd-modules-load[196]: Inserted module 'br_netfilter'
Jan 24 00:59:32.471476 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:59:32.481603 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:59:32.490617 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:59:32.500027 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:59:32.511289 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:59:32.530459 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:59:32.541184 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:59:32.550604 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:59:32.560495 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:59:32.562812 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:59:32.569872 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:59:32.571573 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 24 00:59:32.592920 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:59:32.609946 dracut-cmdline[235]: dracut-dracut-053
Jan 24 00:59:32.615950 dracut-cmdline[235]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:59:32.619949 systemd-resolved[232]: Positive Trust Anchors:
Jan 24 00:59:32.619958 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:59:32.619984 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:59:32.622491 systemd-resolved[232]: Defaulting to hostname 'linux'.
Jan 24 00:59:32.623658 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:59:32.638371 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:59:32.769322 kernel: SCSI subsystem initialized
Jan 24 00:59:32.780358 kernel: Loading iSCSI transport class v2.0-870.
Jan 24 00:59:32.796373 kernel: iscsi: registered transport (tcp)
Jan 24 00:59:32.826764 kernel: iscsi: registered transport (qla4xxx)
Jan 24 00:59:32.826867 kernel: QLogic iSCSI HBA Driver
Jan 24 00:59:32.882596 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:59:32.907495 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 24 00:59:32.942534 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 24 00:59:32.942579 kernel: device-mapper: uevent: version 1.0.3
Jan 24 00:59:32.946497 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 24 00:59:32.994395 kernel: raid6: avx2x4 gen() 34193 MB/s
Jan 24 00:59:33.013414 kernel: raid6: avx2x2 gen() 25914 MB/s
Jan 24 00:59:33.034223 kernel: raid6: avx2x1 gen() 24776 MB/s
Jan 24 00:59:33.034388 kernel: raid6: using algorithm avx2x4 gen() 34193 MB/s
Jan 24 00:59:33.054854 kernel: raid6: .... xor() 4397 MB/s, rmw enabled
Jan 24 00:59:33.054895 kernel: raid6: using avx2x2 recovery algorithm
Jan 24 00:59:33.078357 kernel: xor: automatically using best checksumming function avx
Jan 24 00:59:33.260389 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 24 00:59:33.273782 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:59:33.288573 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:59:33.303678 systemd-udevd[418]: Using default interface naming scheme 'v255'.
Jan 24 00:59:33.308879 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:59:33.318572 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 24 00:59:33.339851 dracut-pre-trigger[430]: rd.md=0: removing MD RAID activation
Jan 24 00:59:33.376921 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:59:33.393468 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:59:33.475365 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:59:33.490456 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 24 00:59:33.510774 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:59:33.512455 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:59:33.519527 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:59:33.542491 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:59:33.559056 kernel: cryptd: max_cpu_qlen set to 1000
Jan 24 00:59:33.572174 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 24 00:59:33.589345 kernel: libata version 3.00 loaded.
Jan 24 00:59:33.595283 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 24 00:59:33.595472 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 24 00:59:33.595894 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:59:33.596009 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:59:33.607960 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 24 00:59:33.630349 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 24 00:59:33.630425 kernel: GPT:9289727 != 19775487
Jan 24 00:59:33.630465 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 24 00:59:33.635060 kernel: GPT:9289727 != 19775487
Jan 24 00:59:33.635072 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 24 00:59:33.635361 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 24 00:59:33.635376 kernel: AES CTR mode by8 optimization enabled
Jan 24 00:59:33.630112 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:59:33.666398 kernel: ahci 0000:00:1f.2: version 3.0
Jan 24 00:59:33.666740 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 24 00:59:33.666774 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 24 00:59:33.667000 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 24 00:59:33.634813 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:59:33.684589 kernel: scsi host0: ahci
Jan 24 00:59:33.635059 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:59:33.709328 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (474)
Jan 24 00:59:33.709355 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467)
Jan 24 00:59:33.709373 kernel: scsi host1: ahci
Jan 24 00:59:33.709625 kernel: scsi host2: ahci
Jan 24 00:59:33.709852 kernel: scsi host3: ahci
Jan 24 00:59:33.710072 kernel: scsi host4: ahci
Jan 24 00:59:33.710436 kernel: scsi host5: ahci
Jan 24 00:59:33.710625 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 24 00:59:33.646406 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:59:33.733081 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 24 00:59:33.733096 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 24 00:59:33.733106 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 24 00:59:33.733116 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 24 00:59:33.733125 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 24 00:59:33.690881 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:59:33.741633 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:59:33.763517 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 24 00:59:33.985618 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:59:34.003356 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 24 00:59:34.020811 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 24 00:59:34.059118 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 24 00:59:34.059216 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 24 00:59:34.059325 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 24 00:59:34.059345 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 24 00:59:34.059369 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 24 00:59:34.059387 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 24 00:59:34.059400 kernel: ata3.00: applying bridge limits
Jan 24 00:59:34.059413 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 24 00:59:34.059428 kernel: ata3.00: configured for UDMA/100
Jan 24 00:59:34.059444 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 24 00:59:34.039992 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 24 00:59:34.064487 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 24 00:59:34.092526 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 24 00:59:34.109443 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 24 00:59:34.109576 disk-uuid[570]: Primary Header is updated.
disk-uuid[570]: Secondary Entries is updated.
disk-uuid[570]: Secondary Header is updated.
Jan 24 00:59:34.127006 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 24 00:59:34.136516 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:59:34.152492 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 24 00:59:34.152726 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 24 00:59:34.171305 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 24 00:59:34.174126 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:59:35.121346 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 24 00:59:35.121402 disk-uuid[571]: The operation has completed successfully.
Jan 24 00:59:35.158413 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 24 00:59:35.158581 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 24 00:59:35.198656 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 24 00:59:35.212582 sh[596]: Success
Jan 24 00:59:35.226411 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 24 00:59:35.282975 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 24 00:59:35.306929 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 24 00:59:35.318717 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 24 00:59:35.350736 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80
Jan 24 00:59:35.350764 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:59:35.350774 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 24 00:59:35.350795 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 24 00:59:35.350804 kernel: BTRFS info (device dm-0): using free space tree
Jan 24 00:59:35.361310 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 24 00:59:35.372213 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 24 00:59:35.399676 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 24 00:59:35.409794 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 24 00:59:35.437465 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:59:35.437532 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:59:35.437544 kernel: BTRFS info (device vda6): using free space tree
Jan 24 00:59:35.447418 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 24 00:59:35.462630 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 24 00:59:35.472840 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:59:35.485876 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 24 00:59:35.499518 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 24 00:59:35.573037 ignition[687]: Ignition 2.19.0
Jan 24 00:59:35.573049 ignition[687]: Stage: fetch-offline
Jan 24 00:59:35.573296 ignition[687]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:59:35.573309 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:59:35.573423 ignition[687]: parsed url from cmdline: ""
Jan 24 00:59:35.573427 ignition[687]: no config URL provided
Jan 24 00:59:35.573433 ignition[687]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:59:35.573442 ignition[687]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:59:35.573468 ignition[687]: op(1): [started] loading QEMU firmware config module
Jan 24 00:59:35.594926 unknown[687]: fetched base config from "system"
Jan 24 00:59:35.573473 ignition[687]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 24 00:59:35.594938 unknown[687]: fetched user config from "qemu"
Jan 24 00:59:35.585499 ignition[687]: op(1): [finished] loading QEMU firmware config module
Jan 24 00:59:35.598847 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:59:35.587483 ignition[687]: parsing config with SHA512: e50ce3666cdfb3aaae5e2479a465f23b8754df9bcb845f03bd9d80b87da1204ff1fb169252c351dc2ac90da54ef0426f7dcce3bd92ceb466f3977cafc748b27a
Jan 24 00:59:35.595534 ignition[687]: fetch-offline: fetch-offline passed
Jan 24 00:59:35.595619 ignition[687]: Ignition finished successfully
Jan 24 00:59:35.627083 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:59:35.645619 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:59:35.674495 systemd-networkd[786]: lo: Link UP
Jan 24 00:59:35.674537 systemd-networkd[786]: lo: Gained carrier
Jan 24 00:59:35.676304 systemd-networkd[786]: Enumeration completed
Jan 24 00:59:35.676449 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:59:35.677656 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:59:35.677660 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:59:35.681816 systemd-networkd[786]: eth0: Link UP
Jan 24 00:59:35.718309 ignition[789]: Ignition 2.19.0
Jan 24 00:59:35.681822 systemd-networkd[786]: eth0: Gained carrier
Jan 24 00:59:35.718318 ignition[789]: Stage: kargs
Jan 24 00:59:35.681831 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:59:35.718526 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:59:35.683911 systemd[1]: Reached target network.target - Network.
Jan 24 00:59:35.718542 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:59:35.687980 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 24 00:59:35.719377 ignition[789]: kargs: kargs passed
Jan 24 00:59:35.698531 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 24 00:59:35.719423 ignition[789]: Ignition finished successfully
Jan 24 00:59:35.723891 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 24 00:59:35.763618 ignition[797]: Ignition 2.19.0
Jan 24 00:59:35.724359 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.161/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 24 00:59:35.763627 ignition[797]: Stage: disks
Jan 24 00:59:35.730811 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 24 00:59:35.763834 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:59:35.848771 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 24 00:59:35.766689 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 24 00:59:35.763848 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:59:35.772496 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 24 00:59:35.764882 ignition[797]: disks: disks passed
Jan 24 00:59:35.780313 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 24 00:59:35.764933 ignition[797]: Ignition finished successfully
Jan 24 00:59:35.785379 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:59:35.785515 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:59:35.786342 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:59:35.814649 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 24 00:59:35.840136 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 24 00:59:35.872497 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 24 00:59:36.006334 kernel: EXT4-fs (vda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none.
Jan 24 00:59:36.006574 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 24 00:59:36.010584 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:59:36.032374 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:59:36.043451 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816)
Jan 24 00:59:36.036768 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 24 00:59:36.068389 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:59:36.068420 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:59:36.068431 kernel: BTRFS info (device vda6): using free space tree
Jan 24 00:59:36.068442 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 24 00:59:36.054873 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 24 00:59:36.054928 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 24 00:59:36.054956 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:59:36.069912 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:59:36.079727 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 24 00:59:36.104470 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 24 00:59:36.148927 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Jan 24 00:59:36.155638 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Jan 24 00:59:36.166607 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Jan 24 00:59:36.177083 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 24 00:59:36.311049 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 24 00:59:36.320438 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 24 00:59:36.321652 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 24 00:59:36.342503 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 24 00:59:36.351298 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:59:36.365797 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 24 00:59:36.385530 ignition[930]: INFO : Ignition 2.19.0
Jan 24 00:59:36.385530 ignition[930]: INFO : Stage: mount
Jan 24 00:59:36.390916 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:59:36.390916 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:59:36.399572 ignition[930]: INFO : mount: mount passed
Jan 24 00:59:36.402413 ignition[930]: INFO : Ignition finished successfully
Jan 24 00:59:36.407665 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 24 00:59:36.425444 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 24 00:59:36.433853 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:59:36.456335 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943)
Jan 24 00:59:36.468154 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:59:36.468300 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:59:36.468312 kernel: BTRFS info (device vda6): using free space tree
Jan 24 00:59:36.480390 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 24 00:59:36.482019 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:59:36.531124 ignition[960]: INFO : Ignition 2.19.0
Jan 24 00:59:36.531124 ignition[960]: INFO : Stage: files
Jan 24 00:59:36.539939 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:59:36.539939 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:59:36.539939 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Jan 24 00:59:36.539939 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 24 00:59:36.539939 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 24 00:59:36.539939 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 24 00:59:36.539939 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 24 00:59:36.539939 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 24 00:59:36.539344 unknown[960]: wrote ssh authorized keys file for user: core
Jan 24 00:59:36.589928 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 24 00:59:36.589928 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 24 00:59:36.589928 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 24 00:59:36.589928 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 24 00:59:36.589928 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:59:36.589928 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:59:36.589928 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 24 00:59:36.589928 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 24 00:59:36.589928 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 24 00:59:36.589928 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
00:59:36.764530 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Jan 24 00:59:37.201615 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:59:37.201615 ignition[960]: INFO : files: op(8): [started] processing unit "containerd.service" Jan 24 00:59:37.214818 ignition[960]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 24 00:59:37.224098 ignition[960]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 24 00:59:37.224098 ignition[960]: INFO : files: op(8): [finished] processing unit "containerd.service" Jan 24 00:59:37.224098 ignition[960]: INFO : files: op(a): [started] processing unit "coreos-metadata.service" Jan 24 00:59:37.224098 ignition[960]: INFO : files: op(a): op(b): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 00:59:37.224098 ignition[960]: INFO : files: op(a): op(b): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 00:59:37.224098 ignition[960]: INFO : files: op(a): [finished] processing unit "coreos-metadata.service" Jan 24 00:59:37.224098 ignition[960]: INFO : files: op(c): [started] setting preset to disabled for "coreos-metadata.service" Jan 24 00:59:37.301857 ignition[960]: INFO : files: op(c): op(d): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 00:59:37.318342 ignition[960]: INFO : files: op(c): op(d): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 00:59:37.325492 ignition[960]: INFO : files: op(c): [finished] setting preset to disabled for "coreos-metadata.service" Jan 24 00:59:37.325492 ignition[960]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:59:37.325492 ignition[960]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:59:37.325492 ignition[960]: INFO : files: files passed Jan 24 00:59:37.325492 ignition[960]: INFO : Ignition finished successfully Jan 24 00:59:37.328906 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:59:37.350566 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:59:37.367555 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:59:37.372134 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:59:37.372443 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 24 00:59:37.394908 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jan 24 00:59:37.401479 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:59:37.401479 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:59:37.416700 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:59:37.426955 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:59:37.427423 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:59:37.457749 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:59:37.489963 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:59:37.490159 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:59:37.500617 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:59:37.511339 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:59:37.515802 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:59:37.516771 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:59:37.545859 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:59:37.547876 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:59:37.565879 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:59:37.566278 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:59:37.566887 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:59:37.758764 ignition[1015]: INFO : Ignition 2.19.0 Jan 24 00:59:37.758764 ignition[1015]: INFO : Stage: umount Jan 24 00:59:37.758764 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:59:37.758764 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:59:37.758764 ignition[1015]: INFO : umount: umount passed Jan 24 00:59:37.758764 ignition[1015]: INFO : Ignition finished successfully Jan 24 00:59:37.568209 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:59:37.568395 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:59:37.570129 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:59:37.572072 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:59:37.573357 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:59:37.573881 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:59:37.575158 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:59:37.577076 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:59:37.579002 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:59:37.580387 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:59:37.580939 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:59:37.582302 systemd[1]: Stopped target swap.target - Swaps. 
Jan 24 00:59:37.582848 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:59:37.582978 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:59:37.585504 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:59:37.586103 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:59:37.587958 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:59:37.588422 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:59:37.588621 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:59:37.588743 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:59:37.589864 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:59:37.589996 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:59:37.591353 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:59:37.591809 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:59:37.595446 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:59:37.595725 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:59:37.597125 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:59:37.598421 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:59:37.598546 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:59:37.598984 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:59:37.599363 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:59:38.039399 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Jan 24 00:59:37.599686 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:59:37.599817 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:59:37.600508 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:59:37.600714 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:59:37.602304 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:59:37.603807 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:59:37.604060 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:59:37.604376 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:59:37.604827 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:59:37.604968 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:59:37.609980 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:59:37.610218 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:59:37.633632 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:59:37.638998 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:59:37.639143 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:59:37.641218 systemd[1]: Stopped target network.target - Network. Jan 24 00:59:37.641691 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:59:37.641745 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jan 24 00:59:37.642986 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:59:37.643042 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:59:37.644324 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:59:37.644369 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:59:37.644891 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:59:37.644933 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:59:37.646401 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:59:37.646974 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:59:37.740595 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:59:37.740774 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:59:37.750449 systemd-networkd[786]: eth0: DHCPv6 lease lost Jan 24 00:59:37.751386 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:59:37.751503 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:59:37.758774 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:59:37.758964 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:59:37.759913 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:59:37.759975 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:59:37.775496 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:59:37.779029 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:59:37.779154 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:59:37.782299 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:59:37.782348 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:59:37.787579 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:59:37.787626 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:59:37.789135 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:59:37.797106 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:59:37.797388 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:59:37.799578 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:59:37.799628 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:59:37.813385 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:59:37.813525 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:59:37.830532 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:59:37.830820 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:59:37.840136 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:59:37.840317 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:59:37.848416 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:59:37.848460 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:59:37.853006 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jan 24 00:59:37.853063 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:59:37.857587 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:59:37.857642 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:59:37.865954 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:59:37.866011 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:59:37.890512 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:59:37.899936 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:59:37.899997 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:59:37.910480 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 24 00:59:37.910532 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:59:37.916365 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:59:37.916416 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:59:37.925609 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:59:37.925660 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:59:37.930995 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:59:37.931136 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:59:37.940492 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:59:37.967521 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:59:37.978713 systemd[1]: Switching root. Jan 24 00:59:38.142781 systemd-journald[195]: Journal stopped Jan 24 00:59:39.432663 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:59:39.432741 kernel: SELinux: policy capability open_perms=1 Jan 24 00:59:39.432759 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:59:39.432769 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:59:39.432786 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:59:39.432796 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:59:39.432806 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:59:39.432821 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:59:39.432832 kernel: audit: type=1403 audit(1769216378.289:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 00:59:39.432846 systemd[1]: Successfully loaded SELinux policy in 71.370ms. Jan 24 00:59:39.432865 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.826ms. Jan 24 00:59:39.432879 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:59:39.432892 systemd[1]: Detected virtualization kvm. Jan 24 00:59:39.432903 systemd[1]: Detected architecture x86-64. Jan 24 00:59:39.432914 systemd[1]: Detected first boot. Jan 24 00:59:39.432927 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:59:39.432937 zram_generator::config[1074]: No configuration found. 
Jan 24 00:59:39.432949 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:59:39.432960 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:59:39.432971 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 24 00:59:39.432982 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:59:39.432992 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:59:39.433003 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:59:39.433016 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:59:39.433027 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:59:39.433038 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:59:39.433049 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:59:39.433059 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:59:39.433070 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:59:39.433081 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:59:39.433092 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:59:39.433102 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:59:39.433122 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 00:59:39.433133 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:59:39.433143 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:59:39.433154 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:59:39.433165 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:59:39.433176 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:59:39.433297 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:59:39.433313 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:59:39.433324 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:59:39.433338 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:59:39.433349 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:59:39.433360 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:59:39.433371 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 24 00:59:39.433382 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:59:39.433392 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:59:39.433403 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:59:39.433413 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:59:39.433426 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:59:39.433437 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:59:39.433448 systemd[1]: Mounting media.mount - External Media Directory... 
Jan 24 00:59:39.433459 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:59:39.433471 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:59:39.433481 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:59:39.433492 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:59:39.433503 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:59:39.433516 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:59:39.433527 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:59:39.433538 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:59:39.433548 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:59:39.433558 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:59:39.433569 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:59:39.433579 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 00:59:39.433590 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:59:39.433600 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:59:39.433614 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 24 00:59:39.433625 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 24 00:59:39.433635 kernel: ACPI: bus type drm_connector registered Jan 24 00:59:39.433645 kernel: fuse: init (API version 7.39) Jan 24 00:59:39.433656 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:59:39.433666 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:59:39.433677 kernel: loop: module loaded Jan 24 00:59:39.433687 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:59:39.433720 systemd-journald[1174]: Collecting audit messages is disabled. Jan 24 00:59:39.433741 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:59:39.433752 systemd-journald[1174]: Journal started Jan 24 00:59:39.433770 systemd-journald[1174]: Runtime Journal (/run/log/journal/90b3d1869c734d998dc9f2908afb2566) is 6.0M, max 48.4M, 42.3M free. Jan 24 00:59:39.451994 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:59:39.452049 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:59:39.458323 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:59:39.464835 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:59:39.469038 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:59:39.473702 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:59:39.477688 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jan 24 00:59:39.482063 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:59:39.486469 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:59:39.490661 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:59:39.495565 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:59:39.500652 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:59:39.500931 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:59:39.505752 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:59:39.505992 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:59:39.510721 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:59:39.510959 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:59:39.515408 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:59:39.515630 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:59:39.520583 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:59:39.520804 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:59:39.525176 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:59:39.525744 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:59:39.530333 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:59:39.534811 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:59:39.539866 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 00:59:39.556489 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:59:39.569335 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:59:39.574812 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:59:39.579116 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:59:39.581826 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:59:39.587885 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:59:39.592324 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:59:39.596395 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:59:39.601525 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:59:39.604508 systemd-journald[1174]: Time spent on flushing to /var/log/journal/90b3d1869c734d998dc9f2908afb2566 is 11.301ms for 912 entries. Jan 24 00:59:39.604508 systemd-journald[1174]: System Journal (/var/log/journal/90b3d1869c734d998dc9f2908afb2566) is 8.0M, max 195.6M, 187.6M free. Jan 24 00:59:39.634468 systemd-journald[1174]: Received client request to flush runtime journal. Jan 24 00:59:39.605137 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 24 00:59:39.615473 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:59:39.622466 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:59:39.627752 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:59:39.633603 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:59:39.640407 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:59:39.645691 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:59:39.653160 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:59:39.670654 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 24 00:59:39.677793 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:59:39.687049 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Jan 24 00:59:39.687087 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Jan 24 00:59:39.691591 udevadm[1223]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 24 00:59:39.695084 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:59:39.714386 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:59:39.751695 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:59:39.765394 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:59:39.783044 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Jan 24 00:59:39.783094 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Jan 24 00:59:39.789838 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:59:40.033795 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:59:40.051405 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:59:40.078491 systemd-udevd[1240]: Using default interface naming scheme 'v255'. Jan 24 00:59:40.105626 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:59:40.120417 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:59:40.137395 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:59:40.153598 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 24 00:59:40.194325 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1251) Jan 24 00:59:40.218284 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 24 00:59:40.226075 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 00:59:40.241186 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:59:40.234696 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 24 00:59:40.246798 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 00:59:40.247029 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 24 00:59:40.252904 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 00:59:40.284299 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 24 00:59:40.312353 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:59:40.324421 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:59:40.334392 systemd-networkd[1252]: lo: Link UP Jan 24 00:59:40.334429 systemd-networkd[1252]: lo: Gained carrier Jan 24 00:59:40.336074 systemd-networkd[1252]: Enumeration completed Jan 24 00:59:40.336899 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:59:40.337546 systemd-networkd[1252]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:59:40.337551 systemd-networkd[1252]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:59:40.338900 systemd-networkd[1252]: eth0: Link UP Jan 24 00:59:40.338905 systemd-networkd[1252]: eth0: Gained carrier Jan 24 00:59:40.338915 systemd-networkd[1252]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:59:40.373596 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:59:40.450379 systemd-networkd[1252]: eth0: DHCPv4 address 10.0.0.161/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 00:59:40.469434 kernel: kvm_amd: TSC scaling supported Jan 24 00:59:40.469609 kernel: kvm_amd: Nested Virtualization enabled Jan 24 00:59:40.469670 kernel: kvm_amd: Nested Paging enabled Jan 24 00:59:40.469683 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 24 00:59:40.469971 kernel: kvm_amd: PMU virtualization is disabled Jan 24 00:59:40.539357 kernel: EDAC MC: Ver: 3.0.0 Jan 24 00:59:40.578985 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 24 00:59:40.712425 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:59:40.730617 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 00:59:40.741798 lvm[1287]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:59:40.777956 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 00:59:40.783768 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:59:40.800393 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 24 00:59:40.811012 lvm[1290]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:59:40.846528 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 24 00:59:40.851419 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:59:40.855952 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:59:40.856011 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:59:40.859716 systemd[1]: Reached target machines.target - Containers. 
Jan 24 00:59:40.864332 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 24 00:59:40.877391 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:59:40.882982 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:59:40.886768 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:59:40.887829 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:59:40.893434 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 24 00:59:40.900350 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:59:40.901731 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:59:40.911530 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:59:40.919396 kernel: loop0: detected capacity change from 0 to 140768 Jan 24 00:59:40.927599 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:59:40.928454 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 24 00:59:40.947358 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:59:40.973314 kernel: loop1: detected capacity change from 0 to 142488 Jan 24 00:59:41.037376 kernel: loop2: detected capacity change from 0 to 224512 Jan 24 00:59:41.084359 kernel: loop3: detected capacity change from 0 to 140768 Jan 24 00:59:41.109337 kernel: loop4: detected capacity change from 0 to 142488 Jan 24 00:59:41.131330 kernel: loop5: detected capacity change from 0 to 224512 Jan 24 00:59:41.146866 (sd-merge)[1310]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 24 00:59:41.147683 (sd-merge)[1310]: Merged extensions into '/usr'. Jan 24 00:59:41.152040 systemd[1]: Reloading requested from client PID 1298 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:59:41.152085 systemd[1]: Reloading... Jan 24 00:59:41.213804 zram_generator::config[1338]: No configuration found. Jan 24 00:59:41.242125 ldconfig[1295]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:59:41.343319 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:59:41.410827 systemd[1]: Reloading finished in 258 ms. Jan 24 00:59:41.432490 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:59:41.437879 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:59:41.463546 systemd[1]: Starting ensure-sysext.service... Jan 24 00:59:41.468069 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:59:41.475996 systemd[1]: Reloading requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:59:41.476050 systemd[1]: Reloading... Jan 24 00:59:41.493843 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jan 24 00:59:41.494353 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 24 00:59:41.495425 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 24 00:59:41.495721 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Jan 24 00:59:41.495827 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Jan 24 00:59:41.499571 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:59:41.499631 systemd-tmpfiles[1383]: Skipping /boot Jan 24 00:59:41.515101 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:59:41.515116 systemd-tmpfiles[1383]: Skipping /boot Jan 24 00:59:41.542510 zram_generator::config[1412]: No configuration found. Jan 24 00:59:41.669092 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:59:41.737069 systemd[1]: Reloading finished in 260 ms. Jan 24 00:59:41.760496 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:59:41.782413 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:59:41.788447 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:59:41.794069 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:59:41.797594 systemd-networkd[1252]: eth0: Gained IPv6LL Jan 24 00:59:41.801504 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:59:41.818609 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:59:41.824536 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:59:41.835065 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:59:41.844664 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:59:41.844926 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:59:41.853496 augenrules[1482]: No rules Jan 24 00:59:41.852823 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:59:41.861555 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:59:41.868614 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:59:41.872544 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:59:41.875525 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:59:41.879519 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:59:41.881061 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:59:41.885891 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:59:41.886104 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 24 00:59:41.890912 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:59:41.891163 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:59:41.896815 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:59:41.902176 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:59:41.902570 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:59:41.907662 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:59:41.914884 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:59:41.924635 systemd-resolved[1461]: Positive Trust Anchors: Jan 24 00:59:41.924684 systemd-resolved[1461]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:59:41.924712 systemd-resolved[1461]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:59:41.928580 systemd-resolved[1461]: Defaulting to hostname 'linux'. Jan 24 00:59:41.929150 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:59:41.929594 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:59:41.939654 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:59:41.945658 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:59:41.952827 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:59:41.956964 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:59:41.957160 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:59:41.957409 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:59:41.958006 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:59:41.963494 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:59:41.963744 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:59:41.969409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:59:41.969707 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:59:41.975015 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:59:41.975394 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:59:41.984860 systemd[1]: Reached target network.target - Network. Jan 24 00:59:41.988687 systemd[1]: Reached target network-online.target - Network is Online. 
Jan 24 00:59:41.993138 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:59:41.998016 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:59:41.998327 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:59:42.010717 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:59:42.016147 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:59:42.022038 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:59:42.027944 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:59:42.032517 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:59:42.032739 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:59:42.032906 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:59:42.034862 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:59:42.035145 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:59:42.041012 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:59:42.041522 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:59:42.047736 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:59:42.048090 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:59:42.053751 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:59:42.054032 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:59:42.061534 systemd[1]: Finished ensure-sysext.service. Jan 24 00:59:42.070076 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:59:42.070185 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:59:42.082611 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 24 00:59:42.144810 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 00:59:42.149900 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:59:42.755238 systemd-timesyncd[1532]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 24 00:59:42.755290 systemd-resolved[1461]: Clock change detected. Flushing caches. Jan 24 00:59:42.755321 systemd-timesyncd[1532]: Initial clock synchronization to Sat 2026-01-24 00:59:42.755115 UTC. Jan 24 00:59:42.759130 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:59:42.764427 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:59:42.769352 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 24 00:59:42.774302 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:59:42.774365 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:59:42.778092 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:59:42.782921 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:59:42.787304 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:59:42.792600 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:59:42.798197 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:59:42.804354 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:59:42.809765 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:59:42.820974 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:59:42.825255 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:59:42.829368 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:59:42.833923 systemd[1]: System is tainted: cgroupsv1 Jan 24 00:59:42.834006 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:59:42.834034 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:59:42.836080 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:59:42.842280 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 24 00:59:42.848012 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:59:42.853284 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:59:42.859131 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:59:42.863310 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:59:42.864282 jq[1541]: false Jan 24 00:59:42.866093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:59:42.872038 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:59:42.883987 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 24 00:59:42.889380 dbus-daemon[1539]: [system] SELinux support is enabled Jan 24 00:59:42.889946 extend-filesystems[1542]: Found loop3 Jan 24 00:59:42.893789 extend-filesystems[1542]: Found loop4 Jan 24 00:59:42.893789 extend-filesystems[1542]: Found loop5 Jan 24 00:59:42.893789 extend-filesystems[1542]: Found sr0 Jan 24 00:59:42.893789 extend-filesystems[1542]: Found vda Jan 24 00:59:42.893789 extend-filesystems[1542]: Found vda1 Jan 24 00:59:42.893789 extend-filesystems[1542]: Found vda2 Jan 24 00:59:42.893789 extend-filesystems[1542]: Found vda3 Jan 24 00:59:42.893789 extend-filesystems[1542]: Found usr Jan 24 00:59:42.893789 extend-filesystems[1542]: Found vda4 Jan 24 00:59:42.893789 extend-filesystems[1542]: Found vda6 Jan 24 00:59:42.893789 extend-filesystems[1542]: Found vda7 Jan 24 00:59:42.893789 extend-filesystems[1542]: Found vda9 Jan 24 00:59:42.893789 extend-filesystems[1542]: Checking size of /dev/vda9 Jan 24 00:59:42.946002 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1255) Jan 24 00:59:42.892379 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:59:42.946133 extend-filesystems[1542]: Resized partition /dev/vda9 Jan 24 00:59:42.897923 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:59:42.958327 extend-filesystems[1567]: resize2fs 1.47.1 (20-May-2024) Jan 24 00:59:42.968857 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 24 00:59:42.916873 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:59:42.917431 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:59:42.918938 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:59:42.960829 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:59:42.966107 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:59:42.977615 jq[1565]: true Jan 24 00:59:42.983027 update_engine[1561]: I20260124 00:59:42.982964 1561 main.cc:92] Flatcar Update Engine starting Jan 24 00:59:42.984827 update_engine[1561]: I20260124 00:59:42.984799 1561 update_check_scheduler.cc:74] Next update check in 9m46s Jan 24 00:59:42.986290 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:59:42.986648 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:59:42.989230 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:59:42.989514 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:59:42.994924 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:59:43.002260 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:59:43.003171 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:59:43.021865 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 24 00:59:43.024199 (ntainerd)[1583]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 00:59:43.052043 jq[1582]: true Jan 24 00:59:43.041789 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 24 00:59:43.042119 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Jan 24 00:59:43.059596 extend-filesystems[1567]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 24 00:59:43.059596 extend-filesystems[1567]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 24 00:59:43.059596 extend-filesystems[1567]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 24 00:59:43.115799 extend-filesystems[1542]: Resized filesystem in /dev/vda9 Jan 24 00:59:43.120586 bash[1616]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:59:43.059922 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:59:43.060225 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:59:43.070343 systemd-logind[1559]: Watching system buttons on /dev/input/event1 (Power Button) Jan 24 00:59:43.070365 systemd-logind[1559]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:59:43.070804 systemd-logind[1559]: New seat seat0. Jan 24 00:59:43.084050 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:59:43.089425 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:59:43.096921 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:59:43.097136 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:59:43.097250 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:59:43.106116 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:59:43.106356 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:59:43.107660 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:59:43.125037 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:59:43.144838 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:59:43.153450 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 24 00:59:43.168900 locksmithd[1617]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:59:43.188473 sshd_keygen[1576]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:59:43.219131 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:59:43.234052 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:59:43.248996 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:59:43.249360 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:59:43.262959 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:59:43.280990 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:59:43.291119 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:59:43.297997 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:59:43.304395 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 24 00:59:43.317590 containerd[1583]: time="2026-01-24T00:59:43.317377558Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:59:43.341528 containerd[1583]: time="2026-01-24T00:59:43.341385631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:59:43.344572 containerd[1583]: time="2026-01-24T00:59:43.344486338Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:59:43.344572 containerd[1583]: time="2026-01-24T00:59:43.344546079Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:59:43.344572 containerd[1583]: time="2026-01-24T00:59:43.344563151Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 00:59:43.344881 containerd[1583]: time="2026-01-24T00:59:43.344818978Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:59:43.344881 containerd[1583]: time="2026-01-24T00:59:43.344871406Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 24 00:59:43.344983 containerd[1583]: time="2026-01-24T00:59:43.344937800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:59:43.344983 containerd[1583]: time="2026-01-24T00:59:43.344980490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:59:43.345303 containerd[1583]: time="2026-01-24T00:59:43.345238031Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:59:43.345303 containerd[1583]: time="2026-01-24T00:59:43.345286381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 00:59:43.345351 containerd[1583]: time="2026-01-24T00:59:43.345304124Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:59:43.345351 containerd[1583]: time="2026-01-24T00:59:43.345314404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 00:59:43.345467 containerd[1583]: time="2026-01-24T00:59:43.345405093Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:59:43.345838 containerd[1583]: time="2026-01-24T00:59:43.345635563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:59:43.346015 containerd[1583]: time="2026-01-24T00:59:43.345941223Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:59:43.346015 containerd[1583]: time="2026-01-24T00:59:43.345994723Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 24 00:59:43.346135 containerd[1583]: time="2026-01-24T00:59:43.346092485Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 24 00:59:43.346216 containerd[1583]: time="2026-01-24T00:59:43.346176332Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:59:43.352925 containerd[1583]: time="2026-01-24T00:59:43.352874379Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:59:43.353042 containerd[1583]: time="2026-01-24T00:59:43.352984084Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 24 00:59:43.353042 containerd[1583]: time="2026-01-24T00:59:43.353035741Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:59:43.353096 containerd[1583]: time="2026-01-24T00:59:43.353054305Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:59:43.353202 containerd[1583]: time="2026-01-24T00:59:43.353160012Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:59:43.353387 containerd[1583]: time="2026-01-24T00:59:43.353336773Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 00:59:43.354122 containerd[1583]: time="2026-01-24T00:59:43.354038702Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:59:43.354335 containerd[1583]: time="2026-01-24T00:59:43.354169938Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 00:59:43.354335 containerd[1583]: time="2026-01-24T00:59:43.354235009Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:59:43.354335 containerd[1583]: time="2026-01-24T00:59:43.354249977Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:59:43.354335 containerd[1583]: time="2026-01-24T00:59:43.354264674Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:59:43.354335 containerd[1583]: time="2026-01-24T00:59:43.354276456Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 24 00:59:43.354335 containerd[1583]: time="2026-01-24T00:59:43.354294360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:59:43.354335 containerd[1583]: time="2026-01-24T00:59:43.354310890Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:59:43.354335 containerd[1583]: time="2026-01-24T00:59:43.354323884Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 24 00:59:43.354335 containerd[1583]: time="2026-01-24T00:59:43.354340576Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:59:43.354483 containerd[1583]: time="2026-01-24T00:59:43.354352348Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 24 00:59:43.354483 containerd[1583]: time="2026-01-24T00:59:43.354363308Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:59:43.354483 containerd[1583]: time="2026-01-24T00:59:43.354380821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:59:43.354483 containerd[1583]: time="2026-01-24T00:59:43.354394517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:59:43.354483 containerd[1583]: time="2026-01-24T00:59:43.354405437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 24 00:59:43.354483 containerd[1583]: time="2026-01-24T00:59:43.354420966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:59:43.354483 containerd[1583]: time="2026-01-24T00:59:43.354440673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:59:43.354483 containerd[1583]: time="2026-01-24T00:59:43.354452635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:59:43.354483 containerd[1583]: time="2026-01-24T00:59:43.354462594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 00:59:43.354483 containerd[1583]: time="2026-01-24T00:59:43.354472893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:59:43.354483 containerd[1583]: time="2026-01-24T00:59:43.354484315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 00:59:43.354652 containerd[1583]: time="2026-01-24T00:59:43.354497689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:59:43.354652 containerd[1583]: time="2026-01-24T00:59:43.354508499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:59:43.354652 containerd[1583]: time="2026-01-24T00:59:43.354519590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:59:43.354652 containerd[1583]: time="2026-01-24T00:59:43.354532845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 00:59:43.354652 containerd[1583]: time="2026-01-24T00:59:43.354546992Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:59:43.354652 containerd[1583]: time="2026-01-24T00:59:43.354564243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:59:43.354652 containerd[1583]: time="2026-01-24T00:59:43.354574302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 24 00:59:43.354652 containerd[1583]: time="2026-01-24T00:59:43.354583990Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:59:43.354652 containerd[1583]: time="2026-01-24T00:59:43.354622121Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 24 00:59:43.354652 containerd[1583]: time="2026-01-24T00:59:43.354635487Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:59:43.354652 containerd[1583]: time="2026-01-24T00:59:43.354645445Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:59:43.354652 containerd[1583]: time="2026-01-24T00:59:43.354655904Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:59:43.354652 containerd[1583]: time="2026-01-24T00:59:43.354754087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 24 00:59:43.354965 containerd[1583]: time="2026-01-24T00:59:43.354769888Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:59:43.354965 containerd[1583]: time="2026-01-24T00:59:43.354785567Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:59:43.354965 containerd[1583]: time="2026-01-24T00:59:43.354795996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 24 00:59:43.356214 containerd[1583]: time="2026-01-24T00:59:43.356063267Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:59:43.356214 containerd[1583]: time="2026-01-24T00:59:43.356177499Z" level=info msg="Connect containerd service" Jan 24 00:59:43.356401 containerd[1583]: time="2026-01-24T00:59:43.356245607Z" level=info msg="using legacy CRI server" Jan 24 00:59:43.356401 containerd[1583]: time="2026-01-24T00:59:43.356359976Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:59:43.356764 containerd[1583]: time="2026-01-24T00:59:43.356586078Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:59:43.358082 containerd[1583]: time="2026-01-24T00:59:43.357828953Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:59:43.358800 containerd[1583]: time="2026-01-24T00:59:43.358209818Z" level=info msg="Start subscribing containerd event" Jan 24 00:59:43.358800 containerd[1583]: time="2026-01-24T00:59:43.358234539Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:59:43.358800 containerd[1583]: time="2026-01-24T00:59:43.358323200Z" level=info msg="Start recovering state" Jan 24 00:59:43.358800 containerd[1583]: time="2026-01-24T00:59:43.358353110Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:59:43.358800 containerd[1583]: time="2026-01-24T00:59:43.358388021Z" level=info msg="Start event monitor" Jan 24 00:59:43.358800 containerd[1583]: time="2026-01-24T00:59:43.358400013Z" level=info msg="Start snapshots syncer" Jan 24 00:59:43.358800 containerd[1583]: time="2026-01-24T00:59:43.358408428Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:59:43.358800 containerd[1583]: time="2026-01-24T00:59:43.358415752Z" level=info msg="Start streaming server" Jan 24 00:59:43.358757 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:59:43.359510 containerd[1583]: time="2026-01-24T00:59:43.359493243Z" level=info msg="containerd successfully booted in 0.043235s" Jan 24 00:59:43.904809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:59:43.910514 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:59:43.915175 systemd[1]: Startup finished in 8.128s (kernel) + 5.091s (userspace) = 13.220s. 
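[Editor's note] The "Start cri plugin with config {...}" line above dumps the effective CRI settings as one Go-struct string: overlayfs snapshotter, runc with SystemdCgroup:false, sandbox image registry.k8s.io/pause:3.8, CNI config dir /etc/cni/net.d. A minimal sketch for pulling a few of those fields back out of such a journal line; the field names come from the log itself, the truncated sample string and the helper are illustrative only:

    import re

    # One journal line as captured above, truncated ("...") to the fields of interest.
    line = ('Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs '
            'DefaultRuntimeName:runc ... Options:map[SystemdCgroup:false] ... '
            'NetworkPluginConfDir:/etc/cni/net.d ... SandboxImage:registry.k8s.io/pause:3.8 ...}')

    def field(name, text):
        # Values in the dump are space-delimited "Key:value" pairs; stop at space, ']' or '}'.
        m = re.search(rf'{name}:([^ \]}}]+)', text)
        return m.group(1) if m else None

    for key in ("Snapshotter", "SystemdCgroup", "NetworkPluginConfDir", "SandboxImage"):
        print(key, "=", field(key, line))

These values normally originate from containerd's on-disk configuration (on most systems /etc/containerd/config.toml); the dump above is just containerd echoing what it resolved at startup.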
Jan 24 00:59:43.916665 (kubelet)[1663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:59:44.400368 kubelet[1663]: E0124 00:59:44.400245 1663 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:59:44.403615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:59:44.404046 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:59:46.874245 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:59:46.884020 systemd[1]: Started sshd@0-10.0.0.161:22-10.0.0.1:35574.service - OpenSSH per-connection server daemon (10.0.0.1:35574). Jan 24 00:59:46.934481 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 35574 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:59:46.937009 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:46.948283 systemd-logind[1559]: New session 1 of user core. Jan 24 00:59:46.949335 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:59:46.959008 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:59:46.973052 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:59:46.975460 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:59:46.985148 (systemd)[1681]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:59:47.095374 systemd[1681]: Queued start job for default target default.target. Jan 24 00:59:47.095918 systemd[1681]: Created slice app.slice - User Application Slice. Jan 24 00:59:47.095936 systemd[1681]: Reached target paths.target - Paths. Jan 24 00:59:47.095948 systemd[1681]: Reached target timers.target - Timers. Jan 24 00:59:47.107868 systemd[1681]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:59:47.115352 systemd[1681]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:59:47.115438 systemd[1681]: Reached target sockets.target - Sockets. Jan 24 00:59:47.115451 systemd[1681]: Reached target basic.target - Basic System. Jan 24 00:59:47.115489 systemd[1681]: Reached target default.target - Main User Target. Jan 24 00:59:47.115522 systemd[1681]: Startup finished in 122ms. Jan 24 00:59:47.116094 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:59:47.117868 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:59:47.174154 systemd[1]: Started sshd@1-10.0.0.161:22-10.0.0.1:35578.service - OpenSSH per-connection server daemon (10.0.0.1:35578). Jan 24 00:59:47.214049 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 35578 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:59:47.215474 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:47.221420 systemd-logind[1559]: New session 2 of user core. Jan 24 00:59:47.231062 systemd[1]: Started session-2.scope - Session 2 of User core. 
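[Editor's note] The kubelet exit above ("failed to load kubelet config file, path: /var/lib/kubelet/config.yaml ... no such file or directory") is expected at this stage: the node has not been bootstrapped yet, and that file is normally generated during join (for kubeadm-based setups, by kubeadm join). A minimal sketch of probing for the same condition; the stand-in config below is an illustration, not the file this node will eventually receive:

    import os

    CONFIG_PATH = "/var/lib/kubelet/config.yaml"

    # Illustrative stand-in only: real clusters generate this file during node
    # bootstrap; the cgroupDriver value here is an assumption for the example.
    MINIMAL_CONFIG = (
        "apiVersion: kubelet.config.k8s.io/v1beta1\n"
        "kind: KubeletConfiguration\n"
        "cgroupDriver: cgroupfs\n"
    )

    if not os.path.exists(CONFIG_PATH):
        print(f"{CONFIG_PATH} is missing - kubelet exits with status 1, as in the log above")
        print("a minimal KubeletConfiguration would look like:\n" + MINIMAL_CONFIG)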
Jan 24 00:59:47.288945 sshd[1694]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:47.297045 systemd[1]: Started sshd@2-10.0.0.161:22-10.0.0.1:35580.service - OpenSSH per-connection server daemon (10.0.0.1:35580). Jan 24 00:59:47.297482 systemd[1]: sshd@1-10.0.0.161:22-10.0.0.1:35578.service: Deactivated successfully. Jan 24 00:59:47.299980 systemd-logind[1559]: Session 2 logged out. Waiting for processes to exit. Jan 24 00:59:47.300593 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 00:59:47.302935 systemd-logind[1559]: Removed session 2. Jan 24 00:59:47.328123 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 35580 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:59:47.329657 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:47.334567 systemd-logind[1559]: New session 3 of user core. Jan 24 00:59:47.350035 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:59:47.403069 sshd[1699]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:47.412083 systemd[1]: Started sshd@3-10.0.0.161:22-10.0.0.1:35588.service - OpenSSH per-connection server daemon (10.0.0.1:35588). Jan 24 00:59:47.412551 systemd[1]: sshd@2-10.0.0.161:22-10.0.0.1:35580.service: Deactivated successfully. Jan 24 00:59:47.415128 systemd-logind[1559]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:59:47.415647 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:59:47.418243 systemd-logind[1559]: Removed session 3. Jan 24 00:59:47.440068 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 35588 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:59:47.441797 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:47.447508 systemd-logind[1559]: New session 4 of user core. Jan 24 00:59:47.457052 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:59:47.514810 sshd[1707]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:47.528989 systemd[1]: Started sshd@4-10.0.0.161:22-10.0.0.1:35602.service - OpenSSH per-connection server daemon (10.0.0.1:35602). Jan 24 00:59:47.529518 systemd[1]: sshd@3-10.0.0.161:22-10.0.0.1:35588.service: Deactivated successfully. Jan 24 00:59:47.532226 systemd-logind[1559]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:59:47.533172 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:59:47.535005 systemd-logind[1559]: Removed session 4. Jan 24 00:59:47.559081 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 35602 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:59:47.560817 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:47.566341 systemd-logind[1559]: New session 5 of user core. Jan 24 00:59:47.575328 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:59:47.639673 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:59:47.640157 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:59:47.655045 sudo[1722]: pam_unix(sudo:session): session closed for user root Jan 24 00:59:47.657490 sshd[1715]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:47.668985 systemd[1]: Started sshd@5-10.0.0.161:22-10.0.0.1:35604.service - OpenSSH per-connection server daemon (10.0.0.1:35604). 
Jan 24 00:59:47.669516 systemd[1]: sshd@4-10.0.0.161:22-10.0.0.1:35602.service: Deactivated successfully. Jan 24 00:59:47.672105 systemd-logind[1559]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:59:47.672637 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:59:47.674885 systemd-logind[1559]: Removed session 5. Jan 24 00:59:47.697082 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 35604 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:59:47.698327 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:47.703232 systemd-logind[1559]: New session 6 of user core. Jan 24 00:59:47.716038 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:59:47.774078 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:59:47.774436 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:59:47.779944 sudo[1732]: pam_unix(sudo:session): session closed for user root Jan 24 00:59:47.787947 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:59:47.788327 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:59:47.810014 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 00:59:47.812988 auditctl[1735]: No rules Jan 24 00:59:47.814206 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:59:47.814526 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:59:47.816870 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:59:47.859195 augenrules[1754]: No rules Jan 24 00:59:47.860826 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:59:47.862325 sudo[1731]: pam_unix(sudo:session): session closed for user root Jan 24 00:59:47.864490 sshd[1724]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:47.886314 systemd[1]: Started sshd@6-10.0.0.161:22-10.0.0.1:35616.service - OpenSSH per-connection server daemon (10.0.0.1:35616). Jan 24 00:59:47.887444 systemd[1]: sshd@5-10.0.0.161:22-10.0.0.1:35604.service: Deactivated successfully. Jan 24 00:59:47.889139 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:59:47.890128 systemd-logind[1559]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:59:47.892542 systemd-logind[1559]: Removed session 6. Jan 24 00:59:47.913642 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 35616 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:59:47.915368 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:47.920586 systemd-logind[1559]: New session 7 of user core. Jan 24 00:59:47.930039 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:59:47.986225 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:59:47.986589 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:59:48.013067 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 24 00:59:48.036970 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 24 00:59:48.037325 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
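[Editor's note] The sudo sequence above deletes the shipped audit rule files and restarts audit-rules, after which both auditctl and augenrules report "No rules". A small sketch (assuming the auditctl binary from the audit userspace tools is on PATH and the caller has root privileges) that inspects the currently loaded kernel audit rules the same way:

    import subprocess

    # `auditctl -l` lists the audit rules currently loaded in the kernel;
    # on this node it would print "No rules", matching the log above.
    result = subprocess.run(["auditctl", "-l"], capture_output=True, text=True)
    print(result.stdout.strip() or result.stderr.strip())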
Jan 24 00:59:48.581448 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:59:48.593022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:59:48.622949 systemd[1]: Reloading requested from client PID 1813 ('systemctl') (unit session-7.scope)... Jan 24 00:59:48.622989 systemd[1]: Reloading... Jan 24 00:59:48.697000 zram_generator::config[1851]: No configuration found. Jan 24 00:59:48.822993 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:59:48.891306 systemd[1]: Reloading finished in 267 ms. Jan 24 00:59:48.945050 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:59:48.945193 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:59:48.945584 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:59:48.948636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:59:49.106646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:59:49.112673 (kubelet)[1912]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:59:49.168648 kubelet[1912]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:59:49.168648 kubelet[1912]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:59:49.168648 kubelet[1912]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:59:49.168648 kubelet[1912]: I0124 00:59:49.168599 1912 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:59:49.338026 kubelet[1912]: I0124 00:59:49.337953 1912 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:59:49.338026 kubelet[1912]: I0124 00:59:49.338012 1912 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:59:49.338297 kubelet[1912]: I0124 00:59:49.338239 1912 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:59:49.363614 kubelet[1912]: I0124 00:59:49.363589 1912 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:59:49.372380 kubelet[1912]: E0124 00:59:49.372315 1912 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:59:49.372380 kubelet[1912]: I0124 00:59:49.372363 1912 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:59:49.381237 kubelet[1912]: I0124 00:59:49.381177 1912 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:59:49.382109 kubelet[1912]: I0124 00:59:49.382010 1912 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:59:49.382285 kubelet[1912]: I0124 00:59:49.382070 1912 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.161","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 24 00:59:49.383046 kubelet[1912]: I0124 00:59:49.382963 1912 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:59:49.383046 kubelet[1912]: I0124 00:59:49.383009 1912 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:59:49.383205 kubelet[1912]: I0124 00:59:49.383132 1912 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:59:49.387264 kubelet[1912]: I0124 00:59:49.387123 1912 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:59:49.387264 kubelet[1912]: I0124 00:59:49.387191 1912 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:59:49.387264 kubelet[1912]: I0124 00:59:49.387210 1912 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:59:49.387415 kubelet[1912]: I0124 00:59:49.387274 1912 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:59:49.387415 kubelet[1912]: E0124 00:59:49.387362 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:59:49.387415 kubelet[1912]: E0124 00:59:49.387402 1912 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:59:49.391172 kubelet[1912]: I0124 00:59:49.391151 1912 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:59:49.392155 kubelet[1912]: I0124 00:59:49.391992 1912 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:59:49.392155 kubelet[1912]: W0124 00:59:49.392077 1912 probe.go:272] 
Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:59:49.395584 kubelet[1912]: I0124 00:59:49.395466 1912 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:59:49.395626 kubelet[1912]: I0124 00:59:49.395597 1912 server.go:1287] "Started kubelet" Jan 24 00:59:49.397148 kubelet[1912]: I0124 00:59:49.396999 1912 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:59:49.397366 kubelet[1912]: I0124 00:59:49.397285 1912 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:59:49.397514 kubelet[1912]: I0124 00:59:49.397497 1912 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:59:49.397939 kubelet[1912]: I0124 00:59:49.395667 1912 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:59:49.399291 kubelet[1912]: I0124 00:59:49.398653 1912 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:59:49.399291 kubelet[1912]: I0124 00:59:49.398883 1912 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:59:49.401246 kubelet[1912]: I0124 00:59:49.401170 1912 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:59:49.401426 kubelet[1912]: E0124 00:59:49.401375 1912 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.161\" not found" Jan 24 00:59:49.401822 kubelet[1912]: I0124 00:59:49.401664 1912 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:59:49.401955 kubelet[1912]: I0124 00:59:49.401917 1912 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:59:49.403970 kubelet[1912]: I0124 00:59:49.403919 1912 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:59:49.404062 kubelet[1912]: I0124 00:59:49.404010 1912 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:59:49.404670 kubelet[1912]: E0124 00:59:49.404631 1912 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:59:49.406185 kubelet[1912]: I0124 00:59:49.406033 1912 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:59:49.407635 kubelet[1912]: E0124 00:59:49.407577 1912 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.161\" not found" node="10.0.0.161" Jan 24 00:59:49.430102 kubelet[1912]: I0124 00:59:49.429910 1912 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:59:49.430102 kubelet[1912]: I0124 00:59:49.429958 1912 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:59:49.430102 kubelet[1912]: I0124 00:59:49.429976 1912 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:59:49.502265 kubelet[1912]: E0124 00:59:49.502070 1912 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.161\" not found" Jan 24 00:59:49.502824 kubelet[1912]: I0124 00:59:49.502652 1912 policy_none.go:49] "None policy: Start" Jan 24 00:59:49.502824 kubelet[1912]: I0124 00:59:49.502833 1912 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:59:49.502946 kubelet[1912]: I0124 00:59:49.502848 1912 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:59:49.509641 kubelet[1912]: I0124 00:59:49.509536 1912 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:59:49.509839 kubelet[1912]: I0124 00:59:49.509816 1912 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:59:49.509867 kubelet[1912]: I0124 00:59:49.509831 1912 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:59:49.510985 kubelet[1912]: I0124 00:59:49.510951 1912 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:59:49.513508 kubelet[1912]: E0124 00:59:49.513438 1912 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:59:49.513613 kubelet[1912]: E0124 00:59:49.513539 1912 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.161\" not found" Jan 24 00:59:49.539314 kubelet[1912]: I0124 00:59:49.539161 1912 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:59:49.541634 kubelet[1912]: I0124 00:59:49.541583 1912 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:59:49.541828 kubelet[1912]: I0124 00:59:49.541788 1912 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:59:49.541864 kubelet[1912]: I0124 00:59:49.541848 1912 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
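[Editor's note] The container-manager nodeConfig dumped a little earlier lists the default hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%), and the eviction manager above is the control loop that enforces them. A minimal sketch of how such thresholds are evaluated; the thresholds are taken from the log, the sample observations are invented for illustration:

    # Hard-eviction thresholds as logged in the kubelet nodeConfig above.
    THRESHOLDS = {
        "memory.available": ("quantity", 100 * 1024 * 1024),  # 100Mi in bytes
        "nodefs.available": ("percentage", 0.10),
        "nodefs.inodesFree": ("percentage", 0.05),
        "imagefs.available": ("percentage", 0.15),
        "imagefs.inodesFree": ("percentage", 0.05),
    }

    # Invented sample observations: (available, capacity) per signal.
    observed = {
        "memory.available": (80 * 1024 * 1024, 8 * 1024**3),
        "nodefs.available": (1 * 1024**3, 7 * 1024**3),
    }

    for signal, (kind, limit) in THRESHOLDS.items():
        if signal not in observed:
            continue
        available, capacity = observed[signal]
        breached = available < limit if kind == "quantity" else available / capacity < limit
        print(f"{signal}: available={available} -> {'EVICT' if breached else 'ok'}")

With the invented numbers, memory.available (80Mi) breaches the 100Mi floor while nodefs.available (about 14% free) does not.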
Jan 24 00:59:49.541864 kubelet[1912]: I0124 00:59:49.541856 1912 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:59:49.543023 kubelet[1912]: E0124 00:59:49.541951 1912 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 24 00:59:49.614836 kubelet[1912]: I0124 00:59:49.614662 1912 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.161" Jan 24 00:59:49.620054 kubelet[1912]: I0124 00:59:49.619947 1912 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.161" Jan 24 00:59:49.620054 kubelet[1912]: E0124 00:59:49.620007 1912 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.161\": node \"10.0.0.161\" not found" Jan 24 00:59:49.625864 kubelet[1912]: I0124 00:59:49.625601 1912 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 24 00:59:49.626254 containerd[1583]: time="2026-01-24T00:59:49.626192017Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:59:49.627052 kubelet[1912]: I0124 00:59:49.626571 1912 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 24 00:59:49.645378 sudo[1767]: pam_unix(sudo:session): session closed for user root Jan 24 00:59:49.647314 sshd[1760]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:49.651544 systemd[1]: sshd@6-10.0.0.161:22-10.0.0.1:35616.service: Deactivated successfully. Jan 24 00:59:49.654236 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:59:49.654361 systemd-logind[1559]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:59:49.656152 systemd-logind[1559]: Removed session 7. 
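[Editor's note] After registering, the kubelet pushes podCIDR 192.168.1.0/24 to the runtime, and containerd notes above that no CNI config template is set, so it waits for another component (here presumably the Cilium pod started further down) to drop a config into /etc/cni/net.d. As a rough illustration of what a hand-written file in that directory looks like, a sketch that emits a bridge/host-local conflist for the logged podCIDR; the plugin field names follow the reference CNI bridge and host-local plugins and should be treated as assumptions, since this node actually gets its config from Cilium:

    import json

    POD_CIDR = "192.168.1.0/24"  # from the "Updating Pod CIDR" line above

    # Illustrative conflist only; Cilium writes its own file on this node.
    conflist = {
        "cniVersion": "0.4.0",
        "name": "illustrative-bridge",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "ranges": [[{"subnet": POD_CIDR}]],
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            }
        ],
    }

    print(json.dumps(conflist, indent=2))  # would land in /etc/cni/net.d as a .conflist file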
Jan 24 00:59:50.340162 kubelet[1912]: I0124 00:59:50.339978 1912 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 24 00:59:50.341020 kubelet[1912]: W0124 00:59:50.340176 1912 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 24 00:59:50.341020 kubelet[1912]: W0124 00:59:50.340206 1912 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 24 00:59:50.341020 kubelet[1912]: W0124 00:59:50.340209 1912 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 24 00:59:50.388470 kubelet[1912]: I0124 00:59:50.388355 1912 apiserver.go:52] "Watching apiserver" Jan 24 00:59:50.388651 kubelet[1912]: E0124 00:59:50.388582 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:59:50.403875 kubelet[1912]: I0124 00:59:50.403828 1912 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:59:50.404775 kubelet[1912]: I0124 00:59:50.404621 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-hostproc\") pod \"cilium-pfxzx\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " pod="kube-system/cilium-pfxzx" Jan 24 00:59:50.404958 kubelet[1912]: I0124 00:59:50.404870 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-xtables-lock\") pod \"cilium-pfxzx\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " pod="kube-system/cilium-pfxzx" Jan 24 00:59:50.404958 kubelet[1912]: I0124 00:59:50.404952 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-hubble-tls\") pod \"cilium-pfxzx\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " pod="kube-system/cilium-pfxzx" Jan 24 00:59:50.405004 kubelet[1912]: I0124 00:59:50.404980 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d0a96ed5-5772-4e85-a08a-1edfa6bff76c-kube-proxy\") pod \"kube-proxy-577kn\" (UID: \"d0a96ed5-5772-4e85-a08a-1edfa6bff76c\") " pod="kube-system/kube-proxy-577kn" Jan 24 00:59:50.405996 kubelet[1912]: I0124 00:59:50.405007 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9gsb\" (UniqueName: \"kubernetes.io/projected/d0a96ed5-5772-4e85-a08a-1edfa6bff76c-kube-api-access-k9gsb\") pod \"kube-proxy-577kn\" (UID: \"d0a96ed5-5772-4e85-a08a-1edfa6bff76c\") " pod="kube-system/kube-proxy-577kn" Jan 24 00:59:50.405996 kubelet[1912]: I0124 00:59:50.405428 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-cilium-cgroup\") pod \"cilium-pfxzx\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " pod="kube-system/cilium-pfxzx" Jan 24 00:59:50.405996 kubelet[1912]: I0124 00:59:50.405465 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-lib-modules\") pod \"cilium-pfxzx\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " pod="kube-system/cilium-pfxzx" Jan 24 00:59:50.405996 kubelet[1912]: I0124 00:59:50.405497 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-clustermesh-secrets\") pod \"cilium-pfxzx\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " pod="kube-system/cilium-pfxzx" Jan 24 00:59:50.405996 kubelet[1912]: I0124 00:59:50.405526 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-host-proc-sys-net\") pod \"cilium-pfxzx\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " pod="kube-system/cilium-pfxzx" Jan 24 00:59:50.406222 kubelet[1912]: I0124 00:59:50.405872 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-host-proc-sys-kernel\") pod \"cilium-pfxzx\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " pod="kube-system/cilium-pfxzx" Jan 24 00:59:50.406222 kubelet[1912]: I0124 00:59:50.405918 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-cilium-run\") pod \"cilium-pfxzx\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " pod="kube-system/cilium-pfxzx" Jan 24 00:59:50.406222 kubelet[1912]: I0124 00:59:50.405951 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-cni-path\") pod \"cilium-pfxzx\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " pod="kube-system/cilium-pfxzx" Jan 24 00:59:50.406222 kubelet[1912]: I0124 00:59:50.405977 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-etc-cni-netd\") pod \"cilium-pfxzx\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " pod="kube-system/cilium-pfxzx" Jan 24 00:59:50.406222 kubelet[1912]: I0124 00:59:50.406008 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-cilium-config-path\") pod \"cilium-pfxzx\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " pod="kube-system/cilium-pfxzx" Jan 24 00:59:50.406222 kubelet[1912]: I0124 00:59:50.406035 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0a96ed5-5772-4e85-a08a-1edfa6bff76c-xtables-lock\") pod \"kube-proxy-577kn\" (UID: 
\"d0a96ed5-5772-4e85-a08a-1edfa6bff76c\") " pod="kube-system/kube-proxy-577kn" Jan 24 00:59:50.406381 kubelet[1912]: I0124 00:59:50.406061 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-bpf-maps\") pod \"cilium-pfxzx\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " pod="kube-system/cilium-pfxzx" Jan 24 00:59:50.406381 kubelet[1912]: I0124 00:59:50.406088 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nb6c\" (UniqueName: \"kubernetes.io/projected/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-kube-api-access-4nb6c\") pod \"cilium-pfxzx\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " pod="kube-system/cilium-pfxzx" Jan 24 00:59:50.406381 kubelet[1912]: I0124 00:59:50.406120 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0a96ed5-5772-4e85-a08a-1edfa6bff76c-lib-modules\") pod \"kube-proxy-577kn\" (UID: \"d0a96ed5-5772-4e85-a08a-1edfa6bff76c\") " pod="kube-system/kube-proxy-577kn" Jan 24 00:59:50.697158 kubelet[1912]: E0124 00:59:50.697018 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:50.698091 kubelet[1912]: E0124 00:59:50.697642 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:50.698220 containerd[1583]: time="2026-01-24T00:59:50.697871276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-577kn,Uid:d0a96ed5-5772-4e85-a08a-1edfa6bff76c,Namespace:kube-system,Attempt:0,}" Jan 24 00:59:50.698220 containerd[1583]: time="2026-01-24T00:59:50.698149200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pfxzx,Uid:c44e0029-7d4a-4eb8-b11d-a6de9866e4ee,Namespace:kube-system,Attempt:0,}" Jan 24 00:59:51.203404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2518467120.mount: Deactivated successfully. 
Jan 24 00:59:51.212920 containerd[1583]: time="2026-01-24T00:59:51.212795918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:59:51.213800 containerd[1583]: time="2026-01-24T00:59:51.213625907Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 24 00:59:51.216232 containerd[1583]: time="2026-01-24T00:59:51.216167091Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:59:51.217977 containerd[1583]: time="2026-01-24T00:59:51.217893343Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:59:51.218774 containerd[1583]: time="2026-01-24T00:59:51.218605480Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:59:51.220846 containerd[1583]: time="2026-01-24T00:59:51.220648113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:59:51.221347 containerd[1583]: time="2026-01-24T00:59:51.221299847Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 523.046773ms" Jan 24 00:59:51.223590 containerd[1583]: time="2026-01-24T00:59:51.223519933Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 525.554772ms" Jan 24 00:59:51.319604 containerd[1583]: time="2026-01-24T00:59:51.319050486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:59:51.320020 containerd[1583]: time="2026-01-24T00:59:51.319783594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:59:51.320020 containerd[1583]: time="2026-01-24T00:59:51.319801668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:59:51.320020 containerd[1583]: time="2026-01-24T00:59:51.319887368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:59:51.320020 containerd[1583]: time="2026-01-24T00:59:51.319930522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:59:51.320173 containerd[1583]: time="2026-01-24T00:59:51.320015150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:59:51.320173 containerd[1583]: time="2026-01-24T00:59:51.320053631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:59:51.320242 containerd[1583]: time="2026-01-24T00:59:51.320150843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:59:51.389429 kubelet[1912]: E0124 00:59:51.389313 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:59:51.420174 containerd[1583]: time="2026-01-24T00:59:51.419448911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pfxzx,Uid:c44e0029-7d4a-4eb8-b11d-a6de9866e4ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\"" Jan 24 00:59:51.423493 kubelet[1912]: E0124 00:59:51.422951 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:51.424185 containerd[1583]: time="2026-01-24T00:59:51.424046913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-577kn,Uid:d0a96ed5-5772-4e85-a08a-1edfa6bff76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ace5140e0d4e2d5cf0511614679ea3475a253e49856d637539f940af6c37bea3\"" Jan 24 00:59:51.424795 containerd[1583]: time="2026-01-24T00:59:51.424773159Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 24 00:59:51.426211 kubelet[1912]: E0124 00:59:51.426134 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:52.389606 kubelet[1912]: E0124 00:59:52.389441 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:59:53.389959 kubelet[1912]: E0124 00:59:53.389857 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:59:54.390303 kubelet[1912]: E0124 00:59:54.390222 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:59:54.539614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2622601721.mount: Deactivated successfully. 
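[Editor's note] The pause-image pulls logged earlier report size and wall-clock time ("size 311286 in 523.046773ms" and "in 525.554772ms"), and the much larger Cilium pull started above is reported the same way once it completes below. A quick sketch turning such figures into throughput (the labels "first"/"second" are mine; the log does not say which sandbox each pull served):

    # Size/duration pairs as logged for the two registry.k8s.io/pause:3.8 pulls above.
    pulls = [
        ("pause:3.8, first pull", 311_286, 0.523046773),
        ("pause:3.8, second pull", 311_286, 0.525554772),
    ]

    for name, size_bytes, seconds in pulls:
        rate = size_bytes / seconds / 1024  # KiB/s
        print(f"{name}: {size_bytes} bytes in {seconds:.3f}s ~ {rate:.0f} KiB/s")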
Jan 24 00:59:55.390972 kubelet[1912]: E0124 00:59:55.390935 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:59:56.246982 containerd[1583]: time="2026-01-24T00:59:56.246865150Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:56.247968 containerd[1583]: time="2026-01-24T00:59:56.247915879Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 24 00:59:56.249554 containerd[1583]: time="2026-01-24T00:59:56.249512338Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:56.251120 containerd[1583]: time="2026-01-24T00:59:56.251058654Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 4.826026723s" Jan 24 00:59:56.251120 containerd[1583]: time="2026-01-24T00:59:56.251111012Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 24 00:59:56.252999 containerd[1583]: time="2026-01-24T00:59:56.252659663Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 24 00:59:56.253998 containerd[1583]: time="2026-01-24T00:59:56.253937761Z" level=info msg="CreateContainer within sandbox \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 24 00:59:56.270511 containerd[1583]: time="2026-01-24T00:59:56.270420494Z" level=info msg="CreateContainer within sandbox \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f6daced93166329a0f926f886b4addfaff28fca3360c44c7f32dda8e3824ab13\"" Jan 24 00:59:56.271354 containerd[1583]: time="2026-01-24T00:59:56.271278064Z" level=info msg="StartContainer for \"f6daced93166329a0f926f886b4addfaff28fca3360c44c7f32dda8e3824ab13\"" Jan 24 00:59:56.338541 containerd[1583]: time="2026-01-24T00:59:56.338385572Z" level=info msg="StartContainer for \"f6daced93166329a0f926f886b4addfaff28fca3360c44c7f32dda8e3824ab13\" returns successfully" Jan 24 00:59:56.392016 kubelet[1912]: E0124 00:59:56.391932 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:59:56.561429 kubelet[1912]: E0124 00:59:56.561174 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:56.575338 containerd[1583]: time="2026-01-24T00:59:56.575243283Z" level=info msg="shim disconnected" id=f6daced93166329a0f926f886b4addfaff28fca3360c44c7f32dda8e3824ab13 namespace=k8s.io Jan 24 00:59:56.575338 containerd[1583]: 
time="2026-01-24T00:59:56.575334152Z" level=warning msg="cleaning up after shim disconnected" id=f6daced93166329a0f926f886b4addfaff28fca3360c44c7f32dda8e3824ab13 namespace=k8s.io Jan 24 00:59:56.575544 containerd[1583]: time="2026-01-24T00:59:56.575347287Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:59:57.265240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6daced93166329a0f926f886b4addfaff28fca3360c44c7f32dda8e3824ab13-rootfs.mount: Deactivated successfully. Jan 24 00:59:57.359430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount652541836.mount: Deactivated successfully. Jan 24 00:59:57.392396 kubelet[1912]: E0124 00:59:57.392347 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:59:57.565362 kubelet[1912]: E0124 00:59:57.565205 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:57.567633 containerd[1583]: time="2026-01-24T00:59:57.567564085Z" level=info msg="CreateContainer within sandbox \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 24 00:59:57.591566 containerd[1583]: time="2026-01-24T00:59:57.591530674Z" level=info msg="CreateContainer within sandbox \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f23c6993ab46450c199558f34c60542302ecc1922c28d3813560bf09acb68ce2\"" Jan 24 00:59:57.593085 containerd[1583]: time="2026-01-24T00:59:57.592930687Z" level=info msg="StartContainer for \"f23c6993ab46450c199558f34c60542302ecc1922c28d3813560bf09acb68ce2\"" Jan 24 00:59:57.663940 containerd[1583]: time="2026-01-24T00:59:57.663042703Z" level=info msg="StartContainer for \"f23c6993ab46450c199558f34c60542302ecc1922c28d3813560bf09acb68ce2\" returns successfully" Jan 24 00:59:57.674990 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:59:57.675261 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:59:57.675329 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:59:57.681740 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:59:57.702908 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 24 00:59:57.833650 containerd[1583]: time="2026-01-24T00:59:57.833479465Z" level=info msg="shim disconnected" id=f23c6993ab46450c199558f34c60542302ecc1922c28d3813560bf09acb68ce2 namespace=k8s.io Jan 24 00:59:57.833650 containerd[1583]: time="2026-01-24T00:59:57.833554115Z" level=warning msg="cleaning up after shim disconnected" id=f23c6993ab46450c199558f34c60542302ecc1922c28d3813560bf09acb68ce2 namespace=k8s.io Jan 24 00:59:57.833650 containerd[1583]: time="2026-01-24T00:59:57.833564204Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:59:57.853055 containerd[1583]: time="2026-01-24T00:59:57.852900546Z" level=warning msg="cleanup warnings time=\"2026-01-24T00:59:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 24 00:59:57.902117 containerd[1583]: time="2026-01-24T00:59:57.902045559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:57.902983 containerd[1583]: time="2026-01-24T00:59:57.902935140Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 24 00:59:57.904195 containerd[1583]: time="2026-01-24T00:59:57.904153063Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:57.906405 containerd[1583]: time="2026-01-24T00:59:57.906344854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:57.907168 containerd[1583]: time="2026-01-24T00:59:57.907113058Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.654337199s" Jan 24 00:59:57.907168 containerd[1583]: time="2026-01-24T00:59:57.907164013Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 24 00:59:57.909613 containerd[1583]: time="2026-01-24T00:59:57.909566628Z" level=info msg="CreateContainer within sandbox \"ace5140e0d4e2d5cf0511614679ea3475a253e49856d637539f940af6c37bea3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:59:57.925125 containerd[1583]: time="2026-01-24T00:59:57.925064031Z" level=info msg="CreateContainer within sandbox \"ace5140e0d4e2d5cf0511614679ea3475a253e49856d637539f940af6c37bea3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ec892aab8e0aee0f6ef4284e22a025a6bcb7089e45d7565cecaa8875d1ac3efc\"" Jan 24 00:59:57.925852 containerd[1583]: time="2026-01-24T00:59:57.925777937Z" level=info msg="StartContainer for \"ec892aab8e0aee0f6ef4284e22a025a6bcb7089e45d7565cecaa8875d1ac3efc\"" Jan 24 00:59:58.002161 containerd[1583]: time="2026-01-24T00:59:58.002008218Z" level=info msg="StartContainer for \"ec892aab8e0aee0f6ef4284e22a025a6bcb7089e45d7565cecaa8875d1ac3efc\" returns successfully" Jan 24 00:59:58.265632 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-f23c6993ab46450c199558f34c60542302ecc1922c28d3813560bf09acb68ce2-rootfs.mount: Deactivated successfully. Jan 24 00:59:58.393176 kubelet[1912]: E0124 00:59:58.393103 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:59:58.569366 kubelet[1912]: E0124 00:59:58.569190 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:58.574124 kubelet[1912]: E0124 00:59:58.573251 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:58.576881 containerd[1583]: time="2026-01-24T00:59:58.575399462Z" level=info msg="CreateContainer within sandbox \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 24 00:59:58.584917 kubelet[1912]: I0124 00:59:58.584498 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-577kn" podStartSLOduration=3.103512734 podStartE2EDuration="9.584481412s" podCreationTimestamp="2026-01-24 00:59:49 +0000 UTC" firstStartedPulling="2026-01-24 00:59:51.427024433 +0000 UTC m=+2.308889685" lastFinishedPulling="2026-01-24 00:59:57.907993121 +0000 UTC m=+8.789858363" observedRunningTime="2026-01-24 00:59:58.584085563 +0000 UTC m=+9.465950805" watchObservedRunningTime="2026-01-24 00:59:58.584481412 +0000 UTC m=+9.466346654" Jan 24 00:59:58.607765 containerd[1583]: time="2026-01-24T00:59:58.607541977Z" level=info msg="CreateContainer within sandbox \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"428cf4b76f86f2c470a28271861424b0adc136111835a22a4cf0b7d7641395ac\"" Jan 24 00:59:58.608647 containerd[1583]: time="2026-01-24T00:59:58.608587367Z" level=info msg="StartContainer for \"428cf4b76f86f2c470a28271861424b0adc136111835a22a4cf0b7d7641395ac\"" Jan 24 00:59:58.689256 containerd[1583]: time="2026-01-24T00:59:58.689185867Z" level=info msg="StartContainer for \"428cf4b76f86f2c470a28271861424b0adc136111835a22a4cf0b7d7641395ac\" returns successfully" Jan 24 00:59:58.739905 containerd[1583]: time="2026-01-24T00:59:58.739748643Z" level=info msg="shim disconnected" id=428cf4b76f86f2c470a28271861424b0adc136111835a22a4cf0b7d7641395ac namespace=k8s.io Jan 24 00:59:58.739905 containerd[1583]: time="2026-01-24T00:59:58.739798456Z" level=warning msg="cleaning up after shim disconnected" id=428cf4b76f86f2c470a28271861424b0adc136111835a22a4cf0b7d7641395ac namespace=k8s.io Jan 24 00:59:58.739905 containerd[1583]: time="2026-01-24T00:59:58.739863327Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:59:59.264954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-428cf4b76f86f2c470a28271861424b0adc136111835a22a4cf0b7d7641395ac-rootfs.mount: Deactivated successfully. 
Jan 24 00:59:59.393384 kubelet[1912]: E0124 00:59:59.393258 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:59:59.576949 kubelet[1912]: E0124 00:59:59.576784 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:59.577234 kubelet[1912]: E0124 00:59:59.577049 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:59.579305 containerd[1583]: time="2026-01-24T00:59:59.579147802Z" level=info msg="CreateContainer within sandbox \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 24 00:59:59.599742 containerd[1583]: time="2026-01-24T00:59:59.599598469Z" level=info msg="CreateContainer within sandbox \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"54cdfccffe651d24c20e3dbc7b51c502c6082eb8733413a2891134f74538651a\"" Jan 24 00:59:59.600528 containerd[1583]: time="2026-01-24T00:59:59.600462426Z" level=info msg="StartContainer for \"54cdfccffe651d24c20e3dbc7b51c502c6082eb8733413a2891134f74538651a\"" Jan 24 00:59:59.666474 containerd[1583]: time="2026-01-24T00:59:59.666343736Z" level=info msg="StartContainer for \"54cdfccffe651d24c20e3dbc7b51c502c6082eb8733413a2891134f74538651a\" returns successfully" Jan 24 00:59:59.695227 containerd[1583]: time="2026-01-24T00:59:59.695064665Z" level=info msg="shim disconnected" id=54cdfccffe651d24c20e3dbc7b51c502c6082eb8733413a2891134f74538651a namespace=k8s.io Jan 24 00:59:59.695227 containerd[1583]: time="2026-01-24T00:59:59.695136298Z" level=warning msg="cleaning up after shim disconnected" id=54cdfccffe651d24c20e3dbc7b51c502c6082eb8733413a2891134f74538651a namespace=k8s.io Jan 24 00:59:59.695227 containerd[1583]: time="2026-01-24T00:59:59.695145675Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 01:00:00.265188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54cdfccffe651d24c20e3dbc7b51c502c6082eb8733413a2891134f74538651a-rootfs.mount: Deactivated successfully. 
Jan 24 01:00:00.393958 kubelet[1912]: E0124 01:00:00.393795 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:00.583293 kubelet[1912]: E0124 01:00:00.583047 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:00.586992 containerd[1583]: time="2026-01-24T01:00:00.586662941Z" level=info msg="CreateContainer within sandbox \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 24 01:00:00.609105 containerd[1583]: time="2026-01-24T01:00:00.609001224Z" level=info msg="CreateContainer within sandbox \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2\"" Jan 24 01:00:00.609722 containerd[1583]: time="2026-01-24T01:00:00.609583160Z" level=info msg="StartContainer for \"988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2\"" Jan 24 01:00:00.682173 containerd[1583]: time="2026-01-24T01:00:00.682095558Z" level=info msg="StartContainer for \"988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2\" returns successfully" Jan 24 01:00:00.798240 kubelet[1912]: I0124 01:00:00.798169 1912 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 01:00:01.184812 kernel: Initializing XFRM netlink socket Jan 24 01:00:01.394673 kubelet[1912]: E0124 01:00:01.394529 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:01.590287 kubelet[1912]: E0124 01:00:01.590130 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:02.395022 kubelet[1912]: E0124 01:00:02.394920 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:02.592450 kubelet[1912]: E0124 01:00:02.592351 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:02.902560 systemd-networkd[1252]: cilium_host: Link UP Jan 24 01:00:02.902810 systemd-networkd[1252]: cilium_net: Link UP Jan 24 01:00:02.903068 systemd-networkd[1252]: cilium_net: Gained carrier Jan 24 01:00:02.903313 systemd-networkd[1252]: cilium_host: Gained carrier Jan 24 01:00:03.044785 systemd-networkd[1252]: cilium_vxlan: Link UP Jan 24 01:00:03.044796 systemd-networkd[1252]: cilium_vxlan: Gained carrier Jan 24 01:00:03.276815 kernel: NET: Registered PF_ALG protocol family Jan 24 01:00:03.395765 kubelet[1912]: E0124 01:00:03.395614 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:03.596123 kubelet[1912]: E0124 01:00:03.596050 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:03.650056 systemd-networkd[1252]: cilium_net: Gained IPv6LL Jan 24 01:00:03.714074 systemd-networkd[1252]: cilium_host: Gained IPv6LL Jan 24 01:00:04.071078 
systemd-networkd[1252]: lxc_health: Link UP Jan 24 01:00:04.079542 systemd-networkd[1252]: lxc_health: Gained carrier Jan 24 01:00:04.396547 kubelet[1912]: E0124 01:00:04.396409 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:04.546120 systemd-networkd[1252]: cilium_vxlan: Gained IPv6LL Jan 24 01:00:04.561426 kubelet[1912]: I0124 01:00:04.561359 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pfxzx" podStartSLOduration=10.733326922 podStartE2EDuration="15.561337112s" podCreationTimestamp="2026-01-24 00:59:49 +0000 UTC" firstStartedPulling="2026-01-24 00:59:51.424062066 +0000 UTC m=+2.305927308" lastFinishedPulling="2026-01-24 00:59:56.252072256 +0000 UTC m=+7.133937498" observedRunningTime="2026-01-24 01:00:01.608035088 +0000 UTC m=+12.489900350" watchObservedRunningTime="2026-01-24 01:00:04.561337112 +0000 UTC m=+15.443202354" Jan 24 01:00:04.599110 kubelet[1912]: E0124 01:00:04.598647 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:05.397445 kubelet[1912]: E0124 01:00:05.397315 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:05.826061 systemd-networkd[1252]: lxc_health: Gained IPv6LL Jan 24 01:00:05.828222 kubelet[1912]: I0124 01:00:05.828106 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kddpj\" (UniqueName: \"kubernetes.io/projected/7e30c58e-6e63-4025-9b23-b4deed0ff723-kube-api-access-kddpj\") pod \"nginx-deployment-7fcdb87857-2wjss\" (UID: \"7e30c58e-6e63-4025-9b23-b4deed0ff723\") " pod="default/nginx-deployment-7fcdb87857-2wjss" Jan 24 01:00:06.031954 containerd[1583]: time="2026-01-24T01:00:06.031788612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-2wjss,Uid:7e30c58e-6e63-4025-9b23-b4deed0ff723,Namespace:default,Attempt:0,}" Jan 24 01:00:06.078650 systemd-networkd[1252]: lxcf7135b3cb062: Link UP Jan 24 01:00:06.088801 kernel: eth0: renamed from tmpce127 Jan 24 01:00:06.093391 systemd-networkd[1252]: lxcf7135b3cb062: Gained carrier Jan 24 01:00:06.398656 kubelet[1912]: E0124 01:00:06.398557 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:07.399155 kubelet[1912]: E0124 01:00:07.399040 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:08.066088 systemd-networkd[1252]: lxcf7135b3cb062: Gained IPv6LL Jan 24 01:00:08.399822 kubelet[1912]: E0124 01:00:08.399769 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:08.511484 containerd[1583]: time="2026-01-24T01:00:08.511314157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 01:00:08.511484 containerd[1583]: time="2026-01-24T01:00:08.511385971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 01:00:08.511484 containerd[1583]: time="2026-01-24T01:00:08.511399646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:00:08.512070 containerd[1583]: time="2026-01-24T01:00:08.511510203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:00:08.547739 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 01:00:08.576765 containerd[1583]: time="2026-01-24T01:00:08.576609237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-2wjss,Uid:7e30c58e-6e63-4025-9b23-b4deed0ff723,Namespace:default,Attempt:0,} returns sandbox id \"ce127afdd429d8717c6401af235916b244ae75016e58d42a6c9b2cd7e647978e\"" Jan 24 01:00:08.578231 containerd[1583]: time="2026-01-24T01:00:08.578030285Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 24 01:00:09.387644 kubelet[1912]: E0124 01:00:09.387460 1912 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:09.401128 kubelet[1912]: E0124 01:00:09.401044 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:10.401894 kubelet[1912]: E0124 01:00:10.401783 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:11.402869 kubelet[1912]: E0124 01:00:11.402760 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:12.403893 kubelet[1912]: E0124 01:00:12.403772 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:13.405132 kubelet[1912]: E0124 01:00:13.405020 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:14.405523 kubelet[1912]: E0124 01:00:14.405408 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:14.584554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2059354387.mount: Deactivated successfully. 
Jan 24 01:00:15.406374 kubelet[1912]: E0124 01:00:15.406235 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:16.407195 kubelet[1912]: E0124 01:00:16.407058 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:17.226740 kubelet[1912]: I0124 01:00:17.226479 1912 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 01:00:17.227170 kubelet[1912]: E0124 01:00:17.227130 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:17.408189 kubelet[1912]: E0124 01:00:17.408050 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:17.627979 kubelet[1912]: E0124 01:00:17.627855 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:18.408427 kubelet[1912]: E0124 01:00:18.408244 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:18.429395 containerd[1583]: time="2026-01-24T01:00:18.429296960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:00:18.430436 containerd[1583]: time="2026-01-24T01:00:18.430335094Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63836480" Jan 24 01:00:18.431360 containerd[1583]: time="2026-01-24T01:00:18.431283705Z" level=info msg="ImageCreate event name:\"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:00:18.434885 containerd[1583]: time="2026-01-24T01:00:18.434818894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:00:18.436168 containerd[1583]: time="2026-01-24T01:00:18.436111034Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 9.858052556s" Jan 24 01:00:18.436215 containerd[1583]: time="2026-01-24T01:00:18.436165965Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 24 01:00:18.439065 containerd[1583]: time="2026-01-24T01:00:18.439019245Z" level=info msg="CreateContainer within sandbox \"ce127afdd429d8717c6401af235916b244ae75016e58d42a6c9b2cd7e647978e\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 24 01:00:18.452956 containerd[1583]: time="2026-01-24T01:00:18.452766451Z" level=info msg="CreateContainer within sandbox \"ce127afdd429d8717c6401af235916b244ae75016e58d42a6c9b2cd7e647978e\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"6cd98e9aeb895bab29269dc293bff6de67affdf6db5bd3b8ef3da304d87ce4de\"" Jan 24 01:00:18.453542 containerd[1583]: 
time="2026-01-24T01:00:18.453273147Z" level=info msg="StartContainer for \"6cd98e9aeb895bab29269dc293bff6de67affdf6db5bd3b8ef3da304d87ce4de\"" Jan 24 01:00:18.520839 containerd[1583]: time="2026-01-24T01:00:18.520786396Z" level=info msg="StartContainer for \"6cd98e9aeb895bab29269dc293bff6de67affdf6db5bd3b8ef3da304d87ce4de\" returns successfully" Jan 24 01:00:18.641212 kubelet[1912]: I0124 01:00:18.640998 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-2wjss" podStartSLOduration=3.7810841440000003 podStartE2EDuration="13.640985348s" podCreationTimestamp="2026-01-24 01:00:05 +0000 UTC" firstStartedPulling="2026-01-24 01:00:08.577629772 +0000 UTC m=+19.459495014" lastFinishedPulling="2026-01-24 01:00:18.437530975 +0000 UTC m=+29.319396218" observedRunningTime="2026-01-24 01:00:18.64097855 +0000 UTC m=+29.522843802" watchObservedRunningTime="2026-01-24 01:00:18.640985348 +0000 UTC m=+29.522850590" Jan 24 01:00:19.409332 kubelet[1912]: E0124 01:00:19.409214 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:20.409952 kubelet[1912]: E0124 01:00:20.409810 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:21.411078 kubelet[1912]: E0124 01:00:21.410774 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:22.258318 kubelet[1912]: I0124 01:00:22.258228 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/392a007b-7d2f-4375-bb1e-47448efde533-data\") pod \"nfs-server-provisioner-0\" (UID: \"392a007b-7d2f-4375-bb1e-47448efde533\") " pod="default/nfs-server-provisioner-0" Jan 24 01:00:22.258318 kubelet[1912]: I0124 01:00:22.258311 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw8nm\" (UniqueName: \"kubernetes.io/projected/392a007b-7d2f-4375-bb1e-47448efde533-kube-api-access-zw8nm\") pod \"nfs-server-provisioner-0\" (UID: \"392a007b-7d2f-4375-bb1e-47448efde533\") " pod="default/nfs-server-provisioner-0" Jan 24 01:00:22.411419 kubelet[1912]: E0124 01:00:22.411320 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:22.524205 containerd[1583]: time="2026-01-24T01:00:22.524018107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:392a007b-7d2f-4375-bb1e-47448efde533,Namespace:default,Attempt:0,}" Jan 24 01:00:22.567264 systemd-networkd[1252]: lxcb9bd6cfdbe4a: Link UP Jan 24 01:00:22.575768 kernel: eth0: renamed from tmp586ab Jan 24 01:00:22.580150 systemd-networkd[1252]: lxcb9bd6cfdbe4a: Gained carrier Jan 24 01:00:22.804273 containerd[1583]: time="2026-01-24T01:00:22.803956535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 01:00:22.804273 containerd[1583]: time="2026-01-24T01:00:22.804004694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 01:00:22.804273 containerd[1583]: time="2026-01-24T01:00:22.804018129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:00:22.805420 containerd[1583]: time="2026-01-24T01:00:22.805303675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:00:22.831419 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 01:00:22.863867 containerd[1583]: time="2026-01-24T01:00:22.863814032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:392a007b-7d2f-4375-bb1e-47448efde533,Namespace:default,Attempt:0,} returns sandbox id \"586ab19de3cc1002551b2973e15fe71a1cc1bdae192cc1a8ba150a6879f4121e\"" Jan 24 01:00:22.866107 containerd[1583]: time="2026-01-24T01:00:22.866057297Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 24 01:00:23.412763 kubelet[1912]: E0124 01:00:23.412551 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:23.874173 systemd-networkd[1252]: lxcb9bd6cfdbe4a: Gained IPv6LL Jan 24 01:00:24.413608 kubelet[1912]: E0124 01:00:24.413429 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:24.726664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2494080119.mount: Deactivated successfully. Jan 24 01:00:25.415168 kubelet[1912]: E0124 01:00:25.415133 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:26.416068 kubelet[1912]: E0124 01:00:26.415990 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:26.729796 containerd[1583]: time="2026-01-24T01:00:26.729537553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:00:26.730807 containerd[1583]: time="2026-01-24T01:00:26.730740204Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 24 01:00:26.731796 containerd[1583]: time="2026-01-24T01:00:26.731750448Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:00:26.735303 containerd[1583]: time="2026-01-24T01:00:26.735160205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:00:26.736460 containerd[1583]: time="2026-01-24T01:00:26.736383001Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 3.870262477s" Jan 24 01:00:26.736460 containerd[1583]: time="2026-01-24T01:00:26.736440748Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference 
\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 24 01:00:26.739054 containerd[1583]: time="2026-01-24T01:00:26.739007630Z" level=info msg="CreateContainer within sandbox \"586ab19de3cc1002551b2973e15fe71a1cc1bdae192cc1a8ba150a6879f4121e\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 24 01:00:26.753037 containerd[1583]: time="2026-01-24T01:00:26.752965733Z" level=info msg="CreateContainer within sandbox \"586ab19de3cc1002551b2973e15fe71a1cc1bdae192cc1a8ba150a6879f4121e\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"77a92904a4874e0a31167c1691221c4f2781e179f605cde64f734dc8b5b99f57\"" Jan 24 01:00:26.753621 containerd[1583]: time="2026-01-24T01:00:26.753566931Z" level=info msg="StartContainer for \"77a92904a4874e0a31167c1691221c4f2781e179f605cde64f734dc8b5b99f57\"" Jan 24 01:00:26.869241 containerd[1583]: time="2026-01-24T01:00:26.869200475Z" level=info msg="StartContainer for \"77a92904a4874e0a31167c1691221c4f2781e179f605cde64f734dc8b5b99f57\" returns successfully" Jan 24 01:00:27.416995 kubelet[1912]: E0124 01:00:27.416786 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:27.666363 kubelet[1912]: I0124 01:00:27.666136 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.794120692 podStartE2EDuration="5.666124433s" podCreationTimestamp="2026-01-24 01:00:22 +0000 UTC" firstStartedPulling="2026-01-24 01:00:22.865343249 +0000 UTC m=+33.747208491" lastFinishedPulling="2026-01-24 01:00:26.73734699 +0000 UTC m=+37.619212232" observedRunningTime="2026-01-24 01:00:27.665909317 +0000 UTC m=+38.547774569" watchObservedRunningTime="2026-01-24 01:00:27.666124433 +0000 UTC m=+38.547989676" Jan 24 01:00:28.418208 kubelet[1912]: E0124 01:00:28.418040 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:28.706361 update_engine[1561]: I20260124 01:00:28.706122 1561 update_attempter.cc:509] Updating boot flags... 
Jan 24 01:00:28.742760 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3303) Jan 24 01:00:28.785850 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3305) Jan 24 01:00:29.387986 kubelet[1912]: E0124 01:00:29.387787 1912 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:29.418660 kubelet[1912]: E0124 01:00:29.418553 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:30.419659 kubelet[1912]: E0124 01:00:30.419487 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:31.420609 kubelet[1912]: E0124 01:00:31.420383 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:32.420908 kubelet[1912]: E0124 01:00:32.420674 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:33.421668 kubelet[1912]: E0124 01:00:33.421553 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:34.421983 kubelet[1912]: E0124 01:00:34.421817 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:35.423146 kubelet[1912]: E0124 01:00:35.423047 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:36.423976 kubelet[1912]: E0124 01:00:36.423654 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:36.573832 kubelet[1912]: I0124 01:00:36.573772 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dade7113-c4eb-4778-bc98-467050b02896\" (UniqueName: \"kubernetes.io/nfs/cd490a7a-4fa2-4866-9171-b46c9e3d6daa-pvc-dade7113-c4eb-4778-bc98-467050b02896\") pod \"test-pod-1\" (UID: \"cd490a7a-4fa2-4866-9171-b46c9e3d6daa\") " pod="default/test-pod-1" Jan 24 01:00:36.573832 kubelet[1912]: I0124 01:00:36.573838 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j5st\" (UniqueName: \"kubernetes.io/projected/cd490a7a-4fa2-4866-9171-b46c9e3d6daa-kube-api-access-7j5st\") pod \"test-pod-1\" (UID: \"cd490a7a-4fa2-4866-9171-b46c9e3d6daa\") " pod="default/test-pod-1" Jan 24 01:00:36.706748 kernel: FS-Cache: Loaded Jan 24 01:00:36.786760 kernel: RPC: Registered named UNIX socket transport module. Jan 24 01:00:36.786936 kernel: RPC: Registered udp transport module. Jan 24 01:00:36.786964 kernel: RPC: Registered tcp transport module. Jan 24 01:00:36.788295 kernel: RPC: Registered tcp-with-tls transport module. Jan 24 01:00:36.790083 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 24 01:00:37.046242 kernel: NFS: Registering the id_resolver key type Jan 24 01:00:37.046363 kernel: Key type id_resolver registered Jan 24 01:00:37.046388 kernel: Key type id_legacy registered Jan 24 01:00:37.079669 nfsidmap[3329]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 24 01:00:37.086251 nfsidmap[3332]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 24 01:00:37.124304 containerd[1583]: time="2026-01-24T01:00:37.124264446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:cd490a7a-4fa2-4866-9171-b46c9e3d6daa,Namespace:default,Attempt:0,}" Jan 24 01:00:37.153845 systemd-networkd[1252]: lxc26815bfd3755: Link UP Jan 24 01:00:37.162754 kernel: eth0: renamed from tmp900ae Jan 24 01:00:37.171355 systemd-networkd[1252]: lxc26815bfd3755: Gained carrier Jan 24 01:00:37.355202 containerd[1583]: time="2026-01-24T01:00:37.354828600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 01:00:37.355202 containerd[1583]: time="2026-01-24T01:00:37.354920601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 01:00:37.355202 containerd[1583]: time="2026-01-24T01:00:37.354931422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:00:37.355202 containerd[1583]: time="2026-01-24T01:00:37.355078925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:00:37.387128 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 01:00:37.415734 containerd[1583]: time="2026-01-24T01:00:37.415638917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:cd490a7a-4fa2-4866-9171-b46c9e3d6daa,Namespace:default,Attempt:0,} returns sandbox id \"900ae9d182196320b1cd19454571815c122d3184ca08122da6da7d5ee97504fa\"" Jan 24 01:00:37.416903 containerd[1583]: time="2026-01-24T01:00:37.416789136Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 24 01:00:37.424241 kubelet[1912]: E0124 01:00:37.424188 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:37.510021 containerd[1583]: time="2026-01-24T01:00:37.509953275Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:00:37.511047 containerd[1583]: time="2026-01-24T01:00:37.510956387Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 24 01:00:37.513929 containerd[1583]: time="2026-01-24T01:00:37.513833337Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 96.994327ms" Jan 24 01:00:37.513929 containerd[1583]: time="2026-01-24T01:00:37.513915641Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference 
\"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 24 01:00:37.516239 containerd[1583]: time="2026-01-24T01:00:37.516157136Z" level=info msg="CreateContainer within sandbox \"900ae9d182196320b1cd19454571815c122d3184ca08122da6da7d5ee97504fa\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 24 01:00:37.530249 containerd[1583]: time="2026-01-24T01:00:37.530189502Z" level=info msg="CreateContainer within sandbox \"900ae9d182196320b1cd19454571815c122d3184ca08122da6da7d5ee97504fa\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"d0afe5dad998b910e39dac32b7d919f8f8cd42a35244468eae9bbd6e8d90153f\"" Jan 24 01:00:37.531081 containerd[1583]: time="2026-01-24T01:00:37.531015210Z" level=info msg="StartContainer for \"d0afe5dad998b910e39dac32b7d919f8f8cd42a35244468eae9bbd6e8d90153f\"" Jan 24 01:00:37.584957 containerd[1583]: time="2026-01-24T01:00:37.584904597Z" level=info msg="StartContainer for \"d0afe5dad998b910e39dac32b7d919f8f8cd42a35244468eae9bbd6e8d90153f\" returns successfully" Jan 24 01:00:37.685026 kubelet[1912]: I0124 01:00:37.684943 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.586730504 podStartE2EDuration="15.684921979s" podCreationTimestamp="2026-01-24 01:00:22 +0000 UTC" firstStartedPulling="2026-01-24 01:00:37.416475177 +0000 UTC m=+48.298340419" lastFinishedPulling="2026-01-24 01:00:37.514666652 +0000 UTC m=+48.396531894" observedRunningTime="2026-01-24 01:00:37.684569547 +0000 UTC m=+48.566434789" watchObservedRunningTime="2026-01-24 01:00:37.684921979 +0000 UTC m=+48.566787220" Jan 24 01:00:38.424905 kubelet[1912]: E0124 01:00:38.424781 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:38.978191 systemd-networkd[1252]: lxc26815bfd3755: Gained IPv6LL Jan 24 01:00:39.425114 kubelet[1912]: E0124 01:00:39.424999 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:39.796413 containerd[1583]: time="2026-01-24T01:00:39.796207697Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 01:00:39.806033 containerd[1583]: time="2026-01-24T01:00:39.805979048Z" level=info msg="StopContainer for \"988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2\" with timeout 2 (s)" Jan 24 01:00:39.806397 containerd[1583]: time="2026-01-24T01:00:39.806349537Z" level=info msg="Stop container \"988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2\" with signal terminated" Jan 24 01:00:39.814910 systemd-networkd[1252]: lxc_health: Link DOWN Jan 24 01:00:39.814920 systemd-networkd[1252]: lxc_health: Lost carrier Jan 24 01:00:39.869130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2-rootfs.mount: Deactivated successfully. 
Jan 24 01:00:39.881408 containerd[1583]: time="2026-01-24T01:00:39.881341151Z" level=info msg="shim disconnected" id=988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2 namespace=k8s.io Jan 24 01:00:39.881408 containerd[1583]: time="2026-01-24T01:00:39.881403797Z" level=warning msg="cleaning up after shim disconnected" id=988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2 namespace=k8s.io Jan 24 01:00:39.881408 containerd[1583]: time="2026-01-24T01:00:39.881412875Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 01:00:39.898613 containerd[1583]: time="2026-01-24T01:00:39.898539420Z" level=info msg="StopContainer for \"988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2\" returns successfully" Jan 24 01:00:39.899551 containerd[1583]: time="2026-01-24T01:00:39.899344890Z" level=info msg="StopPodSandbox for \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\"" Jan 24 01:00:39.899551 containerd[1583]: time="2026-01-24T01:00:39.899374485Z" level=info msg="Container to stop \"428cf4b76f86f2c470a28271861424b0adc136111835a22a4cf0b7d7641395ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 01:00:39.899551 containerd[1583]: time="2026-01-24T01:00:39.899388551Z" level=info msg="Container to stop \"988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 01:00:39.899551 containerd[1583]: time="2026-01-24T01:00:39.899398560Z" level=info msg="Container to stop \"54cdfccffe651d24c20e3dbc7b51c502c6082eb8733413a2891134f74538651a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 01:00:39.899551 containerd[1583]: time="2026-01-24T01:00:39.899407396Z" level=info msg="Container to stop \"f6daced93166329a0f926f886b4addfaff28fca3360c44c7f32dda8e3824ab13\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 01:00:39.899551 containerd[1583]: time="2026-01-24T01:00:39.899415691Z" level=info msg="Container to stop \"f23c6993ab46450c199558f34c60542302ecc1922c28d3813560bf09acb68ce2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 01:00:39.901669 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa-shm.mount: Deactivated successfully. Jan 24 01:00:39.927823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa-rootfs.mount: Deactivated successfully. 
Jan 24 01:00:39.932465 containerd[1583]: time="2026-01-24T01:00:39.932417607Z" level=info msg="shim disconnected" id=2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa namespace=k8s.io Jan 24 01:00:39.932465 containerd[1583]: time="2026-01-24T01:00:39.932458624Z" level=warning msg="cleaning up after shim disconnected" id=2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa namespace=k8s.io Jan 24 01:00:39.932465 containerd[1583]: time="2026-01-24T01:00:39.932467851Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 01:00:39.947966 containerd[1583]: time="2026-01-24T01:00:39.947917201Z" level=info msg="TearDown network for sandbox \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\" successfully" Jan 24 01:00:39.947966 containerd[1583]: time="2026-01-24T01:00:39.947955843Z" level=info msg="StopPodSandbox for \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\" returns successfully" Jan 24 01:00:39.997653 kubelet[1912]: I0124 01:00:39.997547 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-cilium-config-path\") pod \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " Jan 24 01:00:39.997653 kubelet[1912]: I0124 01:00:39.997604 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-hostproc\") pod \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " Jan 24 01:00:39.997653 kubelet[1912]: I0124 01:00:39.997624 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-bpf-maps\") pod \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " Jan 24 01:00:39.997653 kubelet[1912]: I0124 01:00:39.997641 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nb6c\" (UniqueName: \"kubernetes.io/projected/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-kube-api-access-4nb6c\") pod \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " Jan 24 01:00:39.997653 kubelet[1912]: I0124 01:00:39.997656 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-xtables-lock\") pod \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " Jan 24 01:00:39.997653 kubelet[1912]: I0124 01:00:39.997669 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-lib-modules\") pod \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " Jan 24 01:00:39.998014 kubelet[1912]: I0124 01:00:39.997729 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-clustermesh-secrets\") pod \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " Jan 24 01:00:39.998014 kubelet[1912]: I0124 01:00:39.997747 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-host-proc-sys-net\") pod \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " Jan 24 01:00:39.998014 kubelet[1912]: I0124 01:00:39.997760 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-host-proc-sys-kernel\") pod \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " Jan 24 01:00:39.998014 kubelet[1912]: I0124 01:00:39.997771 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-cni-path\") pod \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " Jan 24 01:00:39.998014 kubelet[1912]: I0124 01:00:39.997783 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-etc-cni-netd\") pod \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " Jan 24 01:00:39.998014 kubelet[1912]: I0124 01:00:39.997765 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-hostproc" (OuterVolumeSpecName: "hostproc") pod "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee" (UID: "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:00:39.998165 kubelet[1912]: I0124 01:00:39.997817 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee" (UID: "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:00:39.998165 kubelet[1912]: I0124 01:00:39.997798 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-hubble-tls\") pod \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " Jan 24 01:00:39.998165 kubelet[1912]: I0124 01:00:39.997936 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-cilium-cgroup\") pod \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " Jan 24 01:00:39.998165 kubelet[1912]: I0124 01:00:39.997962 1912 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-cilium-run\") pod \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\" (UID: \"c44e0029-7d4a-4eb8-b11d-a6de9866e4ee\") " Jan 24 01:00:39.998165 kubelet[1912]: I0124 01:00:39.998004 1912 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-lib-modules\") on node \"10.0.0.161\" DevicePath \"\"" Jan 24 01:00:39.998165 kubelet[1912]: I0124 01:00:39.998014 1912 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-hostproc\") on node \"10.0.0.161\" DevicePath \"\"" Jan 24 01:00:39.998317 kubelet[1912]: I0124 01:00:39.998046 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee" (UID: "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:00:39.998317 kubelet[1912]: I0124 01:00:39.998064 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee" (UID: "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:00:39.998317 kubelet[1912]: I0124 01:00:39.998078 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee" (UID: "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:00:39.998317 kubelet[1912]: I0124 01:00:39.998096 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee" (UID: "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:00:39.998317 kubelet[1912]: I0124 01:00:39.998112 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee" (UID: "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:00:40.000753 kubelet[1912]: I0124 01:00:40.000175 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-cni-path" (OuterVolumeSpecName: "cni-path") pod "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee" (UID: "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:00:40.000753 kubelet[1912]: I0124 01:00:40.000206 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee" (UID: "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:00:40.000753 kubelet[1912]: I0124 01:00:40.000224 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee" (UID: "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:00:40.001358 kubelet[1912]: I0124 01:00:40.001294 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee" (UID: "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 01:00:40.002667 kubelet[1912]: I0124 01:00:40.002639 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee" (UID: "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 01:00:40.002793 systemd[1]: var-lib-kubelet-pods-c44e0029\x2d7d4a\x2d4eb8\x2db11d\x2da6de9866e4ee-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 24 01:00:40.003013 kubelet[1912]: I0124 01:00:40.002799 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee" (UID: "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 01:00:40.003013 kubelet[1912]: I0124 01:00:40.002990 1912 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-kube-api-access-4nb6c" (OuterVolumeSpecName: "kube-api-access-4nb6c") pod "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee" (UID: "c44e0029-7d4a-4eb8-b11d-a6de9866e4ee"). InnerVolumeSpecName "kube-api-access-4nb6c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 01:00:40.098796 kubelet[1912]: I0124 01:00:40.098557 1912 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-xtables-lock\") on node \"10.0.0.161\" DevicePath \"\"" Jan 24 01:00:40.098796 kubelet[1912]: I0124 01:00:40.098600 1912 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-bpf-maps\") on node \"10.0.0.161\" DevicePath \"\"" Jan 24 01:00:40.098796 kubelet[1912]: I0124 01:00:40.098611 1912 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4nb6c\" (UniqueName: \"kubernetes.io/projected/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-kube-api-access-4nb6c\") on node \"10.0.0.161\" DevicePath \"\"" Jan 24 01:00:40.098796 kubelet[1912]: I0124 01:00:40.098624 1912 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-etc-cni-netd\") on node \"10.0.0.161\" DevicePath \"\"" Jan 24 01:00:40.098796 kubelet[1912]: I0124 01:00:40.098633 1912 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-hubble-tls\") on node \"10.0.0.161\" DevicePath \"\"" Jan 24 01:00:40.098796 kubelet[1912]: I0124 01:00:40.098641 1912 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-clustermesh-secrets\") on node \"10.0.0.161\" DevicePath \"\"" Jan 24 01:00:40.098796 kubelet[1912]: I0124 01:00:40.098649 1912 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-host-proc-sys-net\") on node \"10.0.0.161\" DevicePath \"\"" Jan 24 01:00:40.098796 kubelet[1912]: I0124 01:00:40.098657 1912 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-host-proc-sys-kernel\") on node \"10.0.0.161\" DevicePath \"\"" Jan 24 01:00:40.099125 kubelet[1912]: I0124 01:00:40.098664 1912 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-cni-path\") on node \"10.0.0.161\" DevicePath \"\"" Jan 24 01:00:40.099125 kubelet[1912]: I0124 01:00:40.098671 1912 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-cilium-cgroup\") on node \"10.0.0.161\" DevicePath \"\"" Jan 24 01:00:40.099125 kubelet[1912]: I0124 01:00:40.098736 1912 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-cilium-run\") on node \"10.0.0.161\" DevicePath \"\"" Jan 24 01:00:40.099125 kubelet[1912]: I0124 01:00:40.098744 
1912 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee-cilium-config-path\") on node \"10.0.0.161\" DevicePath \"\"" Jan 24 01:00:40.425439 kubelet[1912]: E0124 01:00:40.425276 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:40.685077 kubelet[1912]: I0124 01:00:40.684750 1912 scope.go:117] "RemoveContainer" containerID="988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2" Jan 24 01:00:40.686547 containerd[1583]: time="2026-01-24T01:00:40.686203881Z" level=info msg="RemoveContainer for \"988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2\"" Jan 24 01:00:40.690723 containerd[1583]: time="2026-01-24T01:00:40.690316704Z" level=info msg="RemoveContainer for \"988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2\" returns successfully" Jan 24 01:00:40.690767 kubelet[1912]: I0124 01:00:40.690564 1912 scope.go:117] "RemoveContainer" containerID="54cdfccffe651d24c20e3dbc7b51c502c6082eb8733413a2891134f74538651a" Jan 24 01:00:40.692997 containerd[1583]: time="2026-01-24T01:00:40.692869455Z" level=info msg="RemoveContainer for \"54cdfccffe651d24c20e3dbc7b51c502c6082eb8733413a2891134f74538651a\"" Jan 24 01:00:40.696630 containerd[1583]: time="2026-01-24T01:00:40.696543297Z" level=info msg="RemoveContainer for \"54cdfccffe651d24c20e3dbc7b51c502c6082eb8733413a2891134f74538651a\" returns successfully" Jan 24 01:00:40.696834 kubelet[1912]: I0124 01:00:40.696800 1912 scope.go:117] "RemoveContainer" containerID="428cf4b76f86f2c470a28271861424b0adc136111835a22a4cf0b7d7641395ac" Jan 24 01:00:40.698191 containerd[1583]: time="2026-01-24T01:00:40.698125186Z" level=info msg="RemoveContainer for \"428cf4b76f86f2c470a28271861424b0adc136111835a22a4cf0b7d7641395ac\"" Jan 24 01:00:40.701818 containerd[1583]: time="2026-01-24T01:00:40.701762753Z" level=info msg="RemoveContainer for \"428cf4b76f86f2c470a28271861424b0adc136111835a22a4cf0b7d7641395ac\" returns successfully" Jan 24 01:00:40.702107 kubelet[1912]: I0124 01:00:40.702056 1912 scope.go:117] "RemoveContainer" containerID="f23c6993ab46450c199558f34c60542302ecc1922c28d3813560bf09acb68ce2" Jan 24 01:00:40.703408 containerd[1583]: time="2026-01-24T01:00:40.703348456Z" level=info msg="RemoveContainer for \"f23c6993ab46450c199558f34c60542302ecc1922c28d3813560bf09acb68ce2\"" Jan 24 01:00:40.706554 containerd[1583]: time="2026-01-24T01:00:40.706486269Z" level=info msg="RemoveContainer for \"f23c6993ab46450c199558f34c60542302ecc1922c28d3813560bf09acb68ce2\" returns successfully" Jan 24 01:00:40.706754 kubelet[1912]: I0124 01:00:40.706734 1912 scope.go:117] "RemoveContainer" containerID="f6daced93166329a0f926f886b4addfaff28fca3360c44c7f32dda8e3824ab13" Jan 24 01:00:40.707636 containerd[1583]: time="2026-01-24T01:00:40.707605573Z" level=info msg="RemoveContainer for \"f6daced93166329a0f926f886b4addfaff28fca3360c44c7f32dda8e3824ab13\"" Jan 24 01:00:40.710646 containerd[1583]: time="2026-01-24T01:00:40.710537240Z" level=info msg="RemoveContainer for \"f6daced93166329a0f926f886b4addfaff28fca3360c44c7f32dda8e3824ab13\" returns successfully" Jan 24 01:00:40.710802 kubelet[1912]: I0124 01:00:40.710753 1912 scope.go:117] "RemoveContainer" containerID="988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2" Jan 24 01:00:40.711048 containerd[1583]: time="2026-01-24T01:00:40.710940466Z" level=error msg="ContainerStatus for 
\"988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2\": not found" Jan 24 01:00:40.711193 kubelet[1912]: E0124 01:00:40.711146 1912 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2\": not found" containerID="988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2" Jan 24 01:00:40.711274 kubelet[1912]: I0124 01:00:40.711175 1912 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2"} err="failed to get container status \"988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"988550680b06b009e6ba1340eef5ceea211da724eef24816085583106117c8f2\": not found" Jan 24 01:00:40.711274 kubelet[1912]: I0124 01:00:40.711238 1912 scope.go:117] "RemoveContainer" containerID="54cdfccffe651d24c20e3dbc7b51c502c6082eb8733413a2891134f74538651a" Jan 24 01:00:40.711533 containerd[1583]: time="2026-01-24T01:00:40.711480487Z" level=error msg="ContainerStatus for \"54cdfccffe651d24c20e3dbc7b51c502c6082eb8733413a2891134f74538651a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54cdfccffe651d24c20e3dbc7b51c502c6082eb8733413a2891134f74538651a\": not found" Jan 24 01:00:40.711644 kubelet[1912]: E0124 01:00:40.711610 1912 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54cdfccffe651d24c20e3dbc7b51c502c6082eb8733413a2891134f74538651a\": not found" containerID="54cdfccffe651d24c20e3dbc7b51c502c6082eb8733413a2891134f74538651a" Jan 24 01:00:40.711750 kubelet[1912]: I0124 01:00:40.711643 1912 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54cdfccffe651d24c20e3dbc7b51c502c6082eb8733413a2891134f74538651a"} err="failed to get container status \"54cdfccffe651d24c20e3dbc7b51c502c6082eb8733413a2891134f74538651a\": rpc error: code = NotFound desc = an error occurred when try to find container \"54cdfccffe651d24c20e3dbc7b51c502c6082eb8733413a2891134f74538651a\": not found" Jan 24 01:00:40.711750 kubelet[1912]: I0124 01:00:40.711658 1912 scope.go:117] "RemoveContainer" containerID="428cf4b76f86f2c470a28271861424b0adc136111835a22a4cf0b7d7641395ac" Jan 24 01:00:40.711973 containerd[1583]: time="2026-01-24T01:00:40.711916865Z" level=error msg="ContainerStatus for \"428cf4b76f86f2c470a28271861424b0adc136111835a22a4cf0b7d7641395ac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"428cf4b76f86f2c470a28271861424b0adc136111835a22a4cf0b7d7641395ac\": not found" Jan 24 01:00:40.712109 kubelet[1912]: E0124 01:00:40.712057 1912 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"428cf4b76f86f2c470a28271861424b0adc136111835a22a4cf0b7d7641395ac\": not found" containerID="428cf4b76f86f2c470a28271861424b0adc136111835a22a4cf0b7d7641395ac" Jan 24 01:00:40.712109 kubelet[1912]: I0124 01:00:40.712082 1912 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"428cf4b76f86f2c470a28271861424b0adc136111835a22a4cf0b7d7641395ac"} err="failed to get container status \"428cf4b76f86f2c470a28271861424b0adc136111835a22a4cf0b7d7641395ac\": rpc error: code = NotFound desc = an error occurred when try to find container \"428cf4b76f86f2c470a28271861424b0adc136111835a22a4cf0b7d7641395ac\": not found" Jan 24 01:00:40.712109 kubelet[1912]: I0124 01:00:40.712101 1912 scope.go:117] "RemoveContainer" containerID="f23c6993ab46450c199558f34c60542302ecc1922c28d3813560bf09acb68ce2" Jan 24 01:00:40.712332 containerd[1583]: time="2026-01-24T01:00:40.712295830Z" level=error msg="ContainerStatus for \"f23c6993ab46450c199558f34c60542302ecc1922c28d3813560bf09acb68ce2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f23c6993ab46450c199558f34c60542302ecc1922c28d3813560bf09acb68ce2\": not found" Jan 24 01:00:40.712450 kubelet[1912]: E0124 01:00:40.712428 1912 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f23c6993ab46450c199558f34c60542302ecc1922c28d3813560bf09acb68ce2\": not found" containerID="f23c6993ab46450c199558f34c60542302ecc1922c28d3813560bf09acb68ce2" Jan 24 01:00:40.712485 kubelet[1912]: I0124 01:00:40.712448 1912 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f23c6993ab46450c199558f34c60542302ecc1922c28d3813560bf09acb68ce2"} err="failed to get container status \"f23c6993ab46450c199558f34c60542302ecc1922c28d3813560bf09acb68ce2\": rpc error: code = NotFound desc = an error occurred when try to find container \"f23c6993ab46450c199558f34c60542302ecc1922c28d3813560bf09acb68ce2\": not found" Jan 24 01:00:40.712485 kubelet[1912]: I0124 01:00:40.712462 1912 scope.go:117] "RemoveContainer" containerID="f6daced93166329a0f926f886b4addfaff28fca3360c44c7f32dda8e3824ab13" Jan 24 01:00:40.712665 containerd[1583]: time="2026-01-24T01:00:40.712644069Z" level=error msg="ContainerStatus for \"f6daced93166329a0f926f886b4addfaff28fca3360c44c7f32dda8e3824ab13\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6daced93166329a0f926f886b4addfaff28fca3360c44c7f32dda8e3824ab13\": not found" Jan 24 01:00:40.712852 kubelet[1912]: E0124 01:00:40.712774 1912 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6daced93166329a0f926f886b4addfaff28fca3360c44c7f32dda8e3824ab13\": not found" containerID="f6daced93166329a0f926f886b4addfaff28fca3360c44c7f32dda8e3824ab13" Jan 24 01:00:40.712852 kubelet[1912]: I0124 01:00:40.712807 1912 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6daced93166329a0f926f886b4addfaff28fca3360c44c7f32dda8e3824ab13"} err="failed to get container status \"f6daced93166329a0f926f886b4addfaff28fca3360c44c7f32dda8e3824ab13\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6daced93166329a0f926f886b4addfaff28fca3360c44c7f32dda8e3824ab13\": not found" Jan 24 01:00:40.782382 systemd[1]: var-lib-kubelet-pods-c44e0029\x2d7d4a\x2d4eb8\x2db11d\x2da6de9866e4ee-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4nb6c.mount: Deactivated successfully. 
Jan 24 01:00:40.782600 systemd[1]: var-lib-kubelet-pods-c44e0029\x2d7d4a\x2d4eb8\x2db11d\x2da6de9866e4ee-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 24 01:00:41.425832 kubelet[1912]: E0124 01:00:41.425663 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:41.545361 kubelet[1912]: I0124 01:00:41.545290 1912 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c44e0029-7d4a-4eb8-b11d-a6de9866e4ee" path="/var/lib/kubelet/pods/c44e0029-7d4a-4eb8-b11d-a6de9866e4ee/volumes" Jan 24 01:00:42.244057 kubelet[1912]: I0124 01:00:42.243963 1912 memory_manager.go:355] "RemoveStaleState removing state" podUID="c44e0029-7d4a-4eb8-b11d-a6de9866e4ee" containerName="cilium-agent" Jan 24 01:00:42.314759 kubelet[1912]: I0124 01:00:42.314598 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/12212771-d868-48d3-985a-fd2b7f22bb48-host-proc-sys-net\") pod \"cilium-zlpqp\" (UID: \"12212771-d868-48d3-985a-fd2b7f22bb48\") " pod="kube-system/cilium-zlpqp" Jan 24 01:00:42.314759 kubelet[1912]: I0124 01:00:42.314647 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/12212771-d868-48d3-985a-fd2b7f22bb48-bpf-maps\") pod \"cilium-zlpqp\" (UID: \"12212771-d868-48d3-985a-fd2b7f22bb48\") " pod="kube-system/cilium-zlpqp" Jan 24 01:00:42.314759 kubelet[1912]: I0124 01:00:42.314668 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/12212771-d868-48d3-985a-fd2b7f22bb48-cilium-cgroup\") pod \"cilium-zlpqp\" (UID: \"12212771-d868-48d3-985a-fd2b7f22bb48\") " pod="kube-system/cilium-zlpqp" Jan 24 01:00:42.314759 kubelet[1912]: I0124 01:00:42.314732 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12212771-d868-48d3-985a-fd2b7f22bb48-lib-modules\") pod \"cilium-zlpqp\" (UID: \"12212771-d868-48d3-985a-fd2b7f22bb48\") " pod="kube-system/cilium-zlpqp" Jan 24 01:00:42.314759 kubelet[1912]: I0124 01:00:42.314748 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/12212771-d868-48d3-985a-fd2b7f22bb48-cilium-ipsec-secrets\") pod \"cilium-zlpqp\" (UID: \"12212771-d868-48d3-985a-fd2b7f22bb48\") " pod="kube-system/cilium-zlpqp" Jan 24 01:00:42.314759 kubelet[1912]: I0124 01:00:42.314761 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/12212771-d868-48d3-985a-fd2b7f22bb48-hostproc\") pod \"cilium-zlpqp\" (UID: \"12212771-d868-48d3-985a-fd2b7f22bb48\") " pod="kube-system/cilium-zlpqp" Jan 24 01:00:42.315054 kubelet[1912]: I0124 01:00:42.314776 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12212771-d868-48d3-985a-fd2b7f22bb48-etc-cni-netd\") pod \"cilium-zlpqp\" (UID: \"12212771-d868-48d3-985a-fd2b7f22bb48\") " pod="kube-system/cilium-zlpqp" Jan 24 01:00:42.315054 kubelet[1912]: I0124 01:00:42.314792 1912 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12212771-d868-48d3-985a-fd2b7f22bb48-cilium-config-path\") pod \"cilium-zlpqp\" (UID: \"12212771-d868-48d3-985a-fd2b7f22bb48\") " pod="kube-system/cilium-zlpqp" Jan 24 01:00:42.315054 kubelet[1912]: I0124 01:00:42.314810 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/12212771-d868-48d3-985a-fd2b7f22bb48-cilium-run\") pod \"cilium-zlpqp\" (UID: \"12212771-d868-48d3-985a-fd2b7f22bb48\") " pod="kube-system/cilium-zlpqp" Jan 24 01:00:42.315054 kubelet[1912]: I0124 01:00:42.314861 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12212771-d868-48d3-985a-fd2b7f22bb48-xtables-lock\") pod \"cilium-zlpqp\" (UID: \"12212771-d868-48d3-985a-fd2b7f22bb48\") " pod="kube-system/cilium-zlpqp" Jan 24 01:00:42.315054 kubelet[1912]: I0124 01:00:42.314941 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/12212771-d868-48d3-985a-fd2b7f22bb48-clustermesh-secrets\") pod \"cilium-zlpqp\" (UID: \"12212771-d868-48d3-985a-fd2b7f22bb48\") " pod="kube-system/cilium-zlpqp" Jan 24 01:00:42.315054 kubelet[1912]: I0124 01:00:42.314979 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/12212771-d868-48d3-985a-fd2b7f22bb48-hubble-tls\") pod \"cilium-zlpqp\" (UID: \"12212771-d868-48d3-985a-fd2b7f22bb48\") " pod="kube-system/cilium-zlpqp" Jan 24 01:00:42.315185 kubelet[1912]: I0124 01:00:42.315007 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qkxd\" (UniqueName: \"kubernetes.io/projected/12212771-d868-48d3-985a-fd2b7f22bb48-kube-api-access-9qkxd\") pod \"cilium-zlpqp\" (UID: \"12212771-d868-48d3-985a-fd2b7f22bb48\") " pod="kube-system/cilium-zlpqp" Jan 24 01:00:42.315185 kubelet[1912]: I0124 01:00:42.315040 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83ed7cc3-edef-4ce1-aa47-67f0dbe4d4fb-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vfd5m\" (UID: \"83ed7cc3-edef-4ce1-aa47-67f0dbe4d4fb\") " pod="kube-system/cilium-operator-6c4d7847fc-vfd5m" Jan 24 01:00:42.315185 kubelet[1912]: I0124 01:00:42.315070 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-747k9\" (UniqueName: \"kubernetes.io/projected/83ed7cc3-edef-4ce1-aa47-67f0dbe4d4fb-kube-api-access-747k9\") pod \"cilium-operator-6c4d7847fc-vfd5m\" (UID: \"83ed7cc3-edef-4ce1-aa47-67f0dbe4d4fb\") " pod="kube-system/cilium-operator-6c4d7847fc-vfd5m" Jan 24 01:00:42.315185 kubelet[1912]: I0124 01:00:42.315096 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/12212771-d868-48d3-985a-fd2b7f22bb48-cni-path\") pod \"cilium-zlpqp\" (UID: \"12212771-d868-48d3-985a-fd2b7f22bb48\") " pod="kube-system/cilium-zlpqp" Jan 24 01:00:42.315185 kubelet[1912]: I0124 01:00:42.315119 1912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/12212771-d868-48d3-985a-fd2b7f22bb48-host-proc-sys-kernel\") pod \"cilium-zlpqp\" (UID: \"12212771-d868-48d3-985a-fd2b7f22bb48\") " pod="kube-system/cilium-zlpqp" Jan 24 01:00:42.426016 kubelet[1912]: E0124 01:00:42.425952 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:42.548456 kubelet[1912]: E0124 01:00:42.548293 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:42.548926 containerd[1583]: time="2026-01-24T01:00:42.548855070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vfd5m,Uid:83ed7cc3-edef-4ce1-aa47-67f0dbe4d4fb,Namespace:kube-system,Attempt:0,}" Jan 24 01:00:42.578261 containerd[1583]: time="2026-01-24T01:00:42.577950445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 01:00:42.578261 containerd[1583]: time="2026-01-24T01:00:42.578005658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 01:00:42.578261 containerd[1583]: time="2026-01-24T01:00:42.578015826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:00:42.578261 containerd[1583]: time="2026-01-24T01:00:42.578126132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:00:42.581059 kubelet[1912]: E0124 01:00:42.580989 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:42.582639 containerd[1583]: time="2026-01-24T01:00:42.581869573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zlpqp,Uid:12212771-d868-48d3-985a-fd2b7f22bb48,Namespace:kube-system,Attempt:0,}" Jan 24 01:00:42.610118 containerd[1583]: time="2026-01-24T01:00:42.610032014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 01:00:42.611452 containerd[1583]: time="2026-01-24T01:00:42.611335171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 01:00:42.611554 containerd[1583]: time="2026-01-24T01:00:42.611500579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:00:42.611853 containerd[1583]: time="2026-01-24T01:00:42.611768828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:00:42.651864 containerd[1583]: time="2026-01-24T01:00:42.651767228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vfd5m,Uid:83ed7cc3-edef-4ce1-aa47-67f0dbe4d4fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"61f0843a5d3fbbef10f8cfd0afb31f68d1968cf6ce2c5152a3bac009389442cd\"" Jan 24 01:00:42.653455 kubelet[1912]: E0124 01:00:42.653411 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:42.655759 containerd[1583]: time="2026-01-24T01:00:42.655178915Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 24 01:00:42.661880 containerd[1583]: time="2026-01-24T01:00:42.661796880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zlpqp,Uid:12212771-d868-48d3-985a-fd2b7f22bb48,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c19e9a324862915654f7eb53945dacce4ac0236b9c95f889bf257a39b9b1b50\"" Jan 24 01:00:42.662638 kubelet[1912]: E0124 01:00:42.662570 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:42.665031 containerd[1583]: time="2026-01-24T01:00:42.664960435Z" level=info msg="CreateContainer within sandbox \"4c19e9a324862915654f7eb53945dacce4ac0236b9c95f889bf257a39b9b1b50\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 24 01:00:42.678101 containerd[1583]: time="2026-01-24T01:00:42.677985223Z" level=info msg="CreateContainer within sandbox \"4c19e9a324862915654f7eb53945dacce4ac0236b9c95f889bf257a39b9b1b50\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c5549c5fb7e6ba4085965e73464c8a610b8689d85141aba96f354cb8cd71dc25\"" Jan 24 01:00:42.678606 containerd[1583]: time="2026-01-24T01:00:42.678579455Z" level=info msg="StartContainer for \"c5549c5fb7e6ba4085965e73464c8a610b8689d85141aba96f354cb8cd71dc25\"" Jan 24 01:00:42.738034 containerd[1583]: time="2026-01-24T01:00:42.737965669Z" level=info msg="StartContainer for \"c5549c5fb7e6ba4085965e73464c8a610b8689d85141aba96f354cb8cd71dc25\" returns successfully" Jan 24 01:00:42.787215 containerd[1583]: time="2026-01-24T01:00:42.787152894Z" level=info msg="shim disconnected" id=c5549c5fb7e6ba4085965e73464c8a610b8689d85141aba96f354cb8cd71dc25 namespace=k8s.io Jan 24 01:00:42.787215 containerd[1583]: time="2026-01-24T01:00:42.787210322Z" level=warning msg="cleaning up after shim disconnected" id=c5549c5fb7e6ba4085965e73464c8a610b8689d85141aba96f354cb8cd71dc25 namespace=k8s.io Jan 24 01:00:42.787215 containerd[1583]: time="2026-01-24T01:00:42.787219279Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 01:00:43.426798 kubelet[1912]: E0124 01:00:43.426765 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:43.632381 containerd[1583]: time="2026-01-24T01:00:43.632297624Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:00:43.633224 containerd[1583]: time="2026-01-24T01:00:43.633166582Z" level=info msg="stop pulling image 
quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 24 01:00:43.634280 containerd[1583]: time="2026-01-24T01:00:43.634237086Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:00:43.635738 containerd[1583]: time="2026-01-24T01:00:43.635647033Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 980.424357ms" Jan 24 01:00:43.635788 containerd[1583]: time="2026-01-24T01:00:43.635736240Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 24 01:00:43.637884 containerd[1583]: time="2026-01-24T01:00:43.637798120Z" level=info msg="CreateContainer within sandbox \"61f0843a5d3fbbef10f8cfd0afb31f68d1968cf6ce2c5152a3bac009389442cd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 24 01:00:43.648337 containerd[1583]: time="2026-01-24T01:00:43.648277335Z" level=info msg="CreateContainer within sandbox \"61f0843a5d3fbbef10f8cfd0afb31f68d1968cf6ce2c5152a3bac009389442cd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"acb982016c4377adcccaf2702d5adaf3ed4f1be6261b619292ef68dc1a226c93\"" Jan 24 01:00:43.648645 containerd[1583]: time="2026-01-24T01:00:43.648624606Z" level=info msg="StartContainer for \"acb982016c4377adcccaf2702d5adaf3ed4f1be6261b619292ef68dc1a226c93\"" Jan 24 01:00:43.696384 kubelet[1912]: E0124 01:00:43.696029 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:43.699773 containerd[1583]: time="2026-01-24T01:00:43.699620205Z" level=info msg="StartContainer for \"acb982016c4377adcccaf2702d5adaf3ed4f1be6261b619292ef68dc1a226c93\" returns successfully" Jan 24 01:00:43.700864 containerd[1583]: time="2026-01-24T01:00:43.700755368Z" level=info msg="CreateContainer within sandbox \"4c19e9a324862915654f7eb53945dacce4ac0236b9c95f889bf257a39b9b1b50\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 24 01:00:43.719359 containerd[1583]: time="2026-01-24T01:00:43.719214434Z" level=info msg="CreateContainer within sandbox \"4c19e9a324862915654f7eb53945dacce4ac0236b9c95f889bf257a39b9b1b50\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"23fb749c42e710b24a2931bb7a76bf9fb3416a947f46e73f5ae7b305bf785e99\"" Jan 24 01:00:43.720039 containerd[1583]: time="2026-01-24T01:00:43.720018321Z" level=info msg="StartContainer for \"23fb749c42e710b24a2931bb7a76bf9fb3416a947f46e73f5ae7b305bf785e99\"" Jan 24 01:00:43.826212 containerd[1583]: time="2026-01-24T01:00:43.826077288Z" level=info msg="StartContainer for \"23fb749c42e710b24a2931bb7a76bf9fb3416a947f46e73f5ae7b305bf785e99\" returns successfully" Jan 24 01:00:43.859747 containerd[1583]: time="2026-01-24T01:00:43.859536482Z" level=info msg="shim 
disconnected" id=23fb749c42e710b24a2931bb7a76bf9fb3416a947f46e73f5ae7b305bf785e99 namespace=k8s.io Jan 24 01:00:43.859747 containerd[1583]: time="2026-01-24T01:00:43.859609418Z" level=warning msg="cleaning up after shim disconnected" id=23fb749c42e710b24a2931bb7a76bf9fb3416a947f46e73f5ae7b305bf785e99 namespace=k8s.io Jan 24 01:00:43.859747 containerd[1583]: time="2026-01-24T01:00:43.859623364Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 01:00:44.427058 kubelet[1912]: E0124 01:00:44.426979 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:44.522976 kubelet[1912]: E0124 01:00:44.522794 1912 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 24 01:00:44.706191 kubelet[1912]: E0124 01:00:44.705674 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:44.706671 kubelet[1912]: E0124 01:00:44.706615 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:44.708046 containerd[1583]: time="2026-01-24T01:00:44.707972662Z" level=info msg="CreateContainer within sandbox \"4c19e9a324862915654f7eb53945dacce4ac0236b9c95f889bf257a39b9b1b50\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 24 01:00:44.727383 containerd[1583]: time="2026-01-24T01:00:44.727305853Z" level=info msg="CreateContainer within sandbox \"4c19e9a324862915654f7eb53945dacce4ac0236b9c95f889bf257a39b9b1b50\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8d76abafdaef80e36cd52184f86738c1b638b128549c72241c5d973a28edab10\"" Jan 24 01:00:44.727936 containerd[1583]: time="2026-01-24T01:00:44.727833999Z" level=info msg="StartContainer for \"8d76abafdaef80e36cd52184f86738c1b638b128549c72241c5d973a28edab10\"" Jan 24 01:00:44.791049 containerd[1583]: time="2026-01-24T01:00:44.790981477Z" level=info msg="StartContainer for \"8d76abafdaef80e36cd52184f86738c1b638b128549c72241c5d973a28edab10\" returns successfully" Jan 24 01:00:44.818036 containerd[1583]: time="2026-01-24T01:00:44.817972411Z" level=info msg="shim disconnected" id=8d76abafdaef80e36cd52184f86738c1b638b128549c72241c5d973a28edab10 namespace=k8s.io Jan 24 01:00:44.818036 containerd[1583]: time="2026-01-24T01:00:44.818030920Z" level=warning msg="cleaning up after shim disconnected" id=8d76abafdaef80e36cd52184f86738c1b638b128549c72241c5d973a28edab10 namespace=k8s.io Jan 24 01:00:44.818036 containerd[1583]: time="2026-01-24T01:00:44.818039767Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 01:00:44.832073 containerd[1583]: time="2026-01-24T01:00:44.831992896Z" level=warning msg="cleanup warnings time=\"2026-01-24T01:00:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 24 01:00:45.422259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d76abafdaef80e36cd52184f86738c1b638b128549c72241c5d973a28edab10-rootfs.mount: Deactivated successfully. 
Jan 24 01:00:45.427957 kubelet[1912]: E0124 01:00:45.427852 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:45.711020 kubelet[1912]: E0124 01:00:45.710841 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:45.711020 kubelet[1912]: E0124 01:00:45.710951 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:45.712875 containerd[1583]: time="2026-01-24T01:00:45.712806431Z" level=info msg="CreateContainer within sandbox \"4c19e9a324862915654f7eb53945dacce4ac0236b9c95f889bf257a39b9b1b50\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 24 01:00:45.728049 kubelet[1912]: I0124 01:00:45.726583 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vfd5m" podStartSLOduration=2.7448261560000002 podStartE2EDuration="3.726570295s" podCreationTimestamp="2026-01-24 01:00:42 +0000 UTC" firstStartedPulling="2026-01-24 01:00:42.654843951 +0000 UTC m=+53.536709193" lastFinishedPulling="2026-01-24 01:00:43.63658809 +0000 UTC m=+54.518453332" observedRunningTime="2026-01-24 01:00:44.730285405 +0000 UTC m=+55.612150647" watchObservedRunningTime="2026-01-24 01:00:45.726570295 +0000 UTC m=+56.608435537" Jan 24 01:00:45.727381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1068992381.mount: Deactivated successfully. Jan 24 01:00:45.729884 containerd[1583]: time="2026-01-24T01:00:45.729823893Z" level=info msg="CreateContainer within sandbox \"4c19e9a324862915654f7eb53945dacce4ac0236b9c95f889bf257a39b9b1b50\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"272202733e77fe87d393b5584ba3d9029ee3edad5b514036d3eed9918e4bd8c0\"" Jan 24 01:00:45.730377 containerd[1583]: time="2026-01-24T01:00:45.730349095Z" level=info msg="StartContainer for \"272202733e77fe87d393b5584ba3d9029ee3edad5b514036d3eed9918e4bd8c0\"" Jan 24 01:00:45.786290 containerd[1583]: time="2026-01-24T01:00:45.786122343Z" level=info msg="StartContainer for \"272202733e77fe87d393b5584ba3d9029ee3edad5b514036d3eed9918e4bd8c0\" returns successfully" Jan 24 01:00:45.810787 containerd[1583]: time="2026-01-24T01:00:45.810725640Z" level=info msg="shim disconnected" id=272202733e77fe87d393b5584ba3d9029ee3edad5b514036d3eed9918e4bd8c0 namespace=k8s.io Jan 24 01:00:45.810787 containerd[1583]: time="2026-01-24T01:00:45.810781824Z" level=warning msg="cleaning up after shim disconnected" id=272202733e77fe87d393b5584ba3d9029ee3edad5b514036d3eed9918e4bd8c0 namespace=k8s.io Jan 24 01:00:45.810787 containerd[1583]: time="2026-01-24T01:00:45.810791211Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 01:00:46.422307 systemd[1]: run-containerd-runc-k8s.io-272202733e77fe87d393b5584ba3d9029ee3edad5b514036d3eed9918e4bd8c0-runc.4lqKqZ.mount: Deactivated successfully. Jan 24 01:00:46.422500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-272202733e77fe87d393b5584ba3d9029ee3edad5b514036d3eed9918e4bd8c0-rootfs.mount: Deactivated successfully. 
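The "Observed pod startup duration" entry for cilium-operator-6c4d7847fc-vfd5m above carries two figures that can be reconstructed from the timestamps it prints: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration matches that value minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The sketch below redoes the arithmetic from the values in the log; the E2E-minus-pull reading of the SLO figure is inferred from these numbers, not quoted from kubelet source.

```go
// Re-derives the startup-latency figures logged for cilium-operator from the
// timestamps in the same line. The layout string matches how kubelet prints
// these times in this log.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-24 01:00:42 +0000 UTC")             // podCreationTimestamp
	pullStart := mustParse("2026-01-24 01:00:42.654843951 +0000 UTC") // firstStartedPulling
	pullEnd := mustParse("2026-01-24 01:00:43.63658809 +0000 UTC")    // lastFinishedPulling
	running := mustParse("2026-01-24 01:00:45.726570295 +0000 UTC")   // observedRunningTime

	e2e := running.Sub(created)    // 3.726570295s, the podStartE2EDuration in the log
	pull := pullEnd.Sub(pullStart) // ~981.7ms, close to the "in 980.424357ms" pull message
	slo := e2e - pull              // 2.744826156s, the podStartSLOduration in the log

	fmt.Println(e2e, pull, slo)
}
```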
Jan 24 01:00:46.428352 kubelet[1912]: E0124 01:00:46.428299 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:46.715758 kubelet[1912]: E0124 01:00:46.715490 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:46.718041 containerd[1583]: time="2026-01-24T01:00:46.717985080Z" level=info msg="CreateContainer within sandbox \"4c19e9a324862915654f7eb53945dacce4ac0236b9c95f889bf257a39b9b1b50\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 24 01:00:46.735855 containerd[1583]: time="2026-01-24T01:00:46.735787097Z" level=info msg="CreateContainer within sandbox \"4c19e9a324862915654f7eb53945dacce4ac0236b9c95f889bf257a39b9b1b50\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b822384545fdc93a5ff33777f750ee969bb1b43d017531ada7d489800062ae32\"" Jan 24 01:00:46.736629 containerd[1583]: time="2026-01-24T01:00:46.736536426Z" level=info msg="StartContainer for \"b822384545fdc93a5ff33777f750ee969bb1b43d017531ada7d489800062ae32\"" Jan 24 01:00:46.807627 containerd[1583]: time="2026-01-24T01:00:46.807572884Z" level=info msg="StartContainer for \"b822384545fdc93a5ff33777f750ee969bb1b43d017531ada7d489800062ae32\" returns successfully" Jan 24 01:00:47.201760 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 24 01:00:47.429037 kubelet[1912]: E0124 01:00:47.428997 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:47.719777 kubelet[1912]: E0124 01:00:47.719746 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:47.733002 kubelet[1912]: I0124 01:00:47.732903 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zlpqp" podStartSLOduration=5.732892948 podStartE2EDuration="5.732892948s" podCreationTimestamp="2026-01-24 01:00:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 01:00:47.732316522 +0000 UTC m=+58.614181764" watchObservedRunningTime="2026-01-24 01:00:47.732892948 +0000 UTC m=+58.614758191" Jan 24 01:00:48.429756 kubelet[1912]: E0124 01:00:48.429634 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:48.721177 kubelet[1912]: E0124 01:00:48.721047 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:49.388164 kubelet[1912]: E0124 01:00:49.388115 1912 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:49.406279 containerd[1583]: time="2026-01-24T01:00:49.406180446Z" level=info msg="StopPodSandbox for \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\"" Jan 24 01:00:49.406279 containerd[1583]: time="2026-01-24T01:00:49.406270544Z" level=info msg="TearDown network for sandbox \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\" successfully" Jan 24 01:00:49.406279 containerd[1583]: time="2026-01-24T01:00:49.406281104Z" level=info 
msg="StopPodSandbox for \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\" returns successfully" Jan 24 01:00:49.407084 containerd[1583]: time="2026-01-24T01:00:49.407064669Z" level=info msg="RemovePodSandbox for \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\"" Jan 24 01:00:49.407204 containerd[1583]: time="2026-01-24T01:00:49.407087702Z" level=info msg="Forcibly stopping sandbox \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\"" Jan 24 01:00:49.407204 containerd[1583]: time="2026-01-24T01:00:49.407134149Z" level=info msg="TearDown network for sandbox \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\" successfully" Jan 24 01:00:49.424492 containerd[1583]: time="2026-01-24T01:00:49.424465208Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 01:00:49.424593 containerd[1583]: time="2026-01-24T01:00:49.424501166Z" level=info msg="RemovePodSandbox \"2db05ea4d70c6ead23483e9791a3eee5836bc73c917d29b3c6bdeec3c58350fa\" returns successfully" Jan 24 01:00:49.430083 kubelet[1912]: E0124 01:00:49.430054 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:49.722740 kubelet[1912]: E0124 01:00:49.722557 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:50.358406 systemd-networkd[1252]: lxc_health: Link UP Jan 24 01:00:50.365401 systemd-networkd[1252]: lxc_health: Gained carrier Jan 24 01:00:50.430232 kubelet[1912]: E0124 01:00:50.430182 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:50.724800 kubelet[1912]: E0124 01:00:50.724655 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:50.818373 systemd[1]: run-containerd-runc-k8s.io-b822384545fdc93a5ff33777f750ee969bb1b43d017531ada7d489800062ae32-runc.TjatJM.mount: Deactivated successfully. 
Jan 24 01:00:51.431250 kubelet[1912]: E0124 01:00:51.431211 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:51.726003 kubelet[1912]: E0124 01:00:51.725858 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:52.226049 systemd-networkd[1252]: lxc_health: Gained IPv6LL Jan 24 01:00:52.431520 kubelet[1912]: E0124 01:00:52.431438 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:52.727620 kubelet[1912]: E0124 01:00:52.727529 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:53.433935 kubelet[1912]: E0124 01:00:53.433858 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:54.434919 kubelet[1912]: E0124 01:00:54.434748 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:55.435019 kubelet[1912]: E0124 01:00:55.434932 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:56.435880 kubelet[1912]: E0124 01:00:56.435755 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:57.436354 kubelet[1912]: E0124 01:00:57.436288 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 01:00:58.437362 kubelet[1912]: E0124 01:00:58.437239 1912 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
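The recurring "Nameserver limits exceeded" errors come from the kubelet capping the nameserver list it passes into pod sandboxes: the classic resolv.conf format is only honored for the first three nameserver entries, so when the node supplies more, the extras are dropped and this error is logged, leaving the applied line at 1.1.1.1 1.0.0.1 8.8.8.8. Below is a hedged sketch of that truncation, with an invented four-entry resolv.conf chosen to reproduce the line seen here; it is not the kubelet's code.

```go
// Hedged sketch of the truncation behind the "Nameserver limits exceeded"
// errors: only the first three `nameserver` entries are kept, the rest are
// reported as omitted. The sample resolv.conf content is invented.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // resolv.conf limit honored by the resolver

func applyNameserverLimit(resolvConf string) (kept, omitted []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				omitted = append(omitted, fields[1])
			}
		}
	}
	return kept, omitted
}

func main() {
	// Hypothetical node resolv.conf with one nameserver too many.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	kept, omitted := applyNameserverLimit(conf)
	fmt.Println("applied nameserver line is:", strings.Join(kept, " "))
	fmt.Println("omitted:", omitted)
}
```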