Apr 17 03:02:45.764389 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Apr 16 22:00:21 -00 2026
Apr 17 03:02:45.764406 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 17 03:02:45.764414 kernel: BIOS-provided physical RAM map:
Apr 17 03:02:45.764419 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 17 03:02:45.764424 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 17 03:02:45.764428 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 17 03:02:45.764433 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 17 03:02:45.764438 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 17 03:02:45.764442 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 17 03:02:45.764446 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 17 03:02:45.764451 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 17 03:02:45.764457 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 17 03:02:45.764461 kernel: NX (Execute Disable) protection: active
Apr 17 03:02:45.764466 kernel: APIC: Static calls initialized
Apr 17 03:02:45.764471 kernel: SMBIOS 2.8 present.
Apr 17 03:02:45.764476 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 17 03:02:45.764482 kernel: DMI: Memory slots populated: 1/1
Apr 17 03:02:45.764487 kernel: Hypervisor detected: KVM
Apr 17 03:02:45.764491 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 17 03:02:45.764496 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 03:02:45.764501 kernel: kvm-clock: using sched offset of 4724571629 cycles
Apr 17 03:02:45.764506 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 03:02:45.764511 kernel: tsc: Detected 2793.438 MHz processor
Apr 17 03:02:45.764516 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 03:02:45.764521 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 03:02:45.764526 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 17 03:02:45.764533 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 17 03:02:45.764537 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 03:02:45.764542 kernel: Using GB pages for direct mapping
Apr 17 03:02:45.764547 kernel: ACPI: Early table checksum verification disabled
Apr 17 03:02:45.764552 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 17 03:02:45.764557 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 03:02:45.764562 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 03:02:45.764567 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 03:02:45.764574 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 17 03:02:45.764582 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 03:02:45.764587 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 03:02:45.764591 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 03:02:45.764596 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 03:02:45.764601 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 17 03:02:45.764608 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 17 03:02:45.764635 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 17 03:02:45.764640 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 17 03:02:45.764645 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 17 03:02:45.764650 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 17 03:02:45.764655 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 17 03:02:45.764660 kernel: No NUMA configuration found
Apr 17 03:02:45.764665 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 17 03:02:45.764670 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Apr 17 03:02:45.764677 kernel: Zone ranges:
Apr 17 03:02:45.764682 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 03:02:45.764687 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 17 03:02:45.764692 kernel:   Normal   empty
Apr 17 03:02:45.764697 kernel:   Device   empty
Apr 17 03:02:45.764702 kernel: Movable zone start for each node
Apr 17 03:02:45.764707 kernel: Early memory node ranges
Apr 17 03:02:45.764712 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Apr 17 03:02:45.764717 kernel:   node   0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 17 03:02:45.764722 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 17 03:02:45.764728 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 03:02:45.764734 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 17 03:02:45.764739 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 17 03:02:45.764744 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 17 03:02:45.764749 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 03:02:45.764754 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 17 03:02:45.764759 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 17 03:02:45.764764 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 03:02:45.764769 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 03:02:45.764775 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 03:02:45.764780 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 03:02:45.764785 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 03:02:45.764790 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 03:02:45.764795 kernel: TSC deadline timer available
Apr 17 03:02:45.764800 kernel: CPU topo: Max. logical packages: 1
Apr 17 03:02:45.764805 kernel: CPU topo: Max. logical dies: 1
Apr 17 03:02:45.764810 kernel: CPU topo: Max. dies per package: 1
Apr 17 03:02:45.764815 kernel: CPU topo: Max. threads per core: 1
Apr 17 03:02:45.764821 kernel: CPU topo: Num. cores per package: 4
Apr 17 03:02:45.764826 kernel: CPU topo: Num. threads per package: 4
Apr 17 03:02:45.764831 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 17 03:02:45.764836 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 03:02:45.764841 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 17 03:02:45.764846 kernel: kvm-guest: setup PV sched yield
Apr 17 03:02:45.764851 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 17 03:02:45.764856 kernel: Booting paravirtualized kernel on KVM
Apr 17 03:02:45.764861 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 03:02:45.764867 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 17 03:02:45.764873 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u524288
Apr 17 03:02:45.764878 kernel: pcpu-alloc: s207448 r8192 d30120 u524288 alloc=1*2097152
Apr 17 03:02:45.764883 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 17 03:02:45.764888 kernel: kvm-guest: PV spinlocks enabled
Apr 17 03:02:45.764893 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 03:02:45.764899 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 17 03:02:45.764904 kernel: random: crng init done
Apr 17 03:02:45.764942 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 03:02:45.764949 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 03:02:45.764954 kernel: Fallback order for Node 0: 0
Apr 17 03:02:45.764960 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Apr 17 03:02:45.764965 kernel: Policy zone: DMA32
Apr 17 03:02:45.764970 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 03:02:45.764975 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 17 03:02:45.764980 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 17 03:02:45.764985 kernel: ftrace: allocated 157 pages with 5 groups
Apr 17 03:02:45.764990 kernel: Dynamic Preempt: voluntary
Apr 17 03:02:45.764997 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 03:02:45.765003 kernel: rcu: RCU event tracing is enabled.
Apr 17 03:02:45.765008 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 17 03:02:45.765013 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 03:02:45.765018 kernel: Rude variant of Tasks RCU enabled.
Apr 17 03:02:45.765023 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 03:02:45.765028 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 03:02:45.765033 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 17 03:02:45.765038 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 03:02:45.765043 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 03:02:45.765050 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 03:02:45.765056 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 17 03:02:45.765061 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 03:02:45.765066 kernel: Console: colour VGA+ 80x25
Apr 17 03:02:45.765076 kernel: printk: legacy console [ttyS0] enabled
Apr 17 03:02:45.765083 kernel: ACPI: Core revision 20240827
Apr 17 03:02:45.765089 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 17 03:02:45.765095 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 03:02:45.765100 kernel: x2apic enabled
Apr 17 03:02:45.765106 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 03:02:45.765111 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 17 03:02:45.765119 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 17 03:02:45.765125 kernel: kvm-guest: setup PV IPIs
Apr 17 03:02:45.765130 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 17 03:02:45.765136 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 03:02:45.765141 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 17 03:02:45.765148 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 17 03:02:45.765154 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 17 03:02:45.765160 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 17 03:02:45.765165 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 03:02:45.765171 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 03:02:45.765176 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 03:02:45.765182 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 03:02:45.765187 kernel: RETBleed: Vulnerable
Apr 17 03:02:45.765193 kernel: Speculative Store Bypass: Vulnerable
Apr 17 03:02:45.765201 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 03:02:45.765206 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 17 03:02:45.765212 kernel: active return thunk: its_return_thunk
Apr 17 03:02:45.765217 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 03:02:45.765223 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 03:02:45.765228 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 03:02:45.765234 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 03:02:45.765239 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 03:02:45.765245 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 03:02:45.765252 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 03:02:45.765257 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 03:02:45.765263 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 17 03:02:45.765268 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 17 03:02:45.765274 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 17 03:02:45.765279 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 17 03:02:45.765285 kernel: Freeing SMP alternatives memory: 32K
Apr 17 03:02:45.765290 kernel: pid_max: default: 32768 minimum: 301
Apr 17 03:02:45.765296 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 17 03:02:45.765303 kernel: landlock: Up and running.
Apr 17 03:02:45.765308 kernel: SELinux: Initializing.
Apr 17 03:02:45.765314 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 03:02:45.765319 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 03:02:45.765325 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 17 03:02:45.765331 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 17 03:02:45.765336 kernel: signal: max sigframe size: 3632
Apr 17 03:02:45.765342 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 03:02:45.765348 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 03:02:45.765355 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 17 03:02:45.765360 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 03:02:45.765366 kernel: smp: Bringing up secondary CPUs ...
Apr 17 03:02:45.765371 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 03:02:45.765393 kernel: .... node #0, CPUs: #1 #2 #3
Apr 17 03:02:45.765398 kernel: smp: Brought up 1 node, 4 CPUs
Apr 17 03:02:45.765404 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 17 03:02:45.765410 kernel: Memory: 2419756K/2571752K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46216K init, 2532K bss, 146108K reserved, 0K cma-reserved)
Apr 17 03:02:45.765417 kernel: devtmpfs: initialized
Apr 17 03:02:45.765423 kernel: x86/mm: Memory block size: 128MB
Apr 17 03:02:45.765428 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 03:02:45.765434 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 17 03:02:45.765440 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 03:02:45.765445 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 03:02:45.765451 kernel: audit: initializing netlink subsys (disabled)
Apr 17 03:02:45.765456 kernel: audit: type=2000 audit(1776394962.988:1): state=initialized audit_enabled=0 res=1
Apr 17 03:02:45.765462 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 03:02:45.765468 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 03:02:45.765474 kernel: cpuidle: using governor menu
Apr 17 03:02:45.765480 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 03:02:45.765485 kernel: dca service started, version 1.12.1
Apr 17 03:02:45.765491 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Apr 17 03:02:45.765496 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 17 03:02:45.765502 kernel: PCI: Using configuration type 1 for base access
Apr 17 03:02:45.765507 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 03:02:45.765513 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 03:02:45.765520 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 03:02:45.765526 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 03:02:45.765531 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 03:02:45.765537 kernel: ACPI: Added _OSI(Module Device)
Apr 17 03:02:45.765542 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 03:02:45.765548 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 03:02:45.765553 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 03:02:45.765559 kernel: ACPI: Interpreter enabled
Apr 17 03:02:45.765564 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 17 03:02:45.765571 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 03:02:45.765577 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 03:02:45.765582 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 03:02:45.765588 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 17 03:02:45.765593 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 03:02:45.765708 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 03:02:45.765766 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 17 03:02:45.765818 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 17 03:02:45.765827 kernel: PCI host bridge to bus 0000:00
Apr 17 03:02:45.765901 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 03:02:45.765979 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 03:02:45.766027 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 03:02:45.766074 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 17 03:02:45.766120 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 17 03:02:45.766166 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 17 03:02:45.766216 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 03:02:45.766287 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 17 03:02:45.766353 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 17 03:02:45.766407 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Apr 17 03:02:45.766460 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Apr 17 03:02:45.766512 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Apr 17 03:02:45.766566 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 03:02:45.766641 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 17 03:02:45.766698 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Apr 17 03:02:45.766752 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Apr 17 03:02:45.766805 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 17 03:02:45.766864 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 17 03:02:45.766939 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Apr 17 03:02:45.767027 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Apr 17 03:02:45.767119 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 17 03:02:45.767197 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 17 03:02:45.767251 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Apr 17 03:02:45.767306 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Apr 17 03:02:45.767358 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 17 03:02:45.767411 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Apr 17 03:02:45.767469 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 17 03:02:45.767523 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 17 03:02:45.767579 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 17 03:02:45.767649 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Apr 17 03:02:45.767714 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Apr 17 03:02:45.767772 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 17 03:02:45.767828 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Apr 17 03:02:45.767836 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 03:02:45.767842 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 03:02:45.767847 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 03:02:45.767853 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 03:02:45.767859 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 17 03:02:45.767864 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 17 03:02:45.767870 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 17 03:02:45.767876 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 17 03:02:45.767883 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 17 03:02:45.767888 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 17 03:02:45.767894 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 17 03:02:45.767899 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 17 03:02:45.767905 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 17 03:02:45.767929 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 17 03:02:45.767935 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 17 03:02:45.767940 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 17 03:02:45.767946 kernel: iommu: Default domain type: Translated
Apr 17 03:02:45.767953 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 03:02:45.767959 kernel: PCI: Using ACPI for IRQ routing
Apr 17 03:02:45.767964 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 03:02:45.767970 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 17 03:02:45.767975 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 17 03:02:45.768032 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 17 03:02:45.768086 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 17 03:02:45.768139 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 03:02:45.768148 kernel: vgaarb: loaded
Apr 17 03:02:45.768154 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 17 03:02:45.768159 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 17 03:02:45.768165 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 03:02:45.768170 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 03:02:45.768176 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 03:02:45.768182 kernel: pnp: PnP ACPI init
Apr 17 03:02:45.768288 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 17 03:02:45.768297 kernel: pnp: PnP ACPI: found 6 devices
Apr 17 03:02:45.768304 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 03:02:45.768310 kernel: NET: Registered PF_INET protocol family
Apr 17 03:02:45.768316 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 03:02:45.768322 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 03:02:45.768327 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 03:02:45.768333 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 03:02:45.768339 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 17 03:02:45.768345 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 17 03:02:45.768352 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 03:02:45.768357 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 03:02:45.768363 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 03:02:45.768369 kernel: NET: Registered PF_XDP protocol family
Apr 17 03:02:45.768419 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 03:02:45.768467 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 03:02:45.768517 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 03:02:45.768565 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 17 03:02:45.768627 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 17 03:02:45.768680 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 17 03:02:45.768688 kernel: PCI: CLS 0 bytes, default 64
Apr 17 03:02:45.768694 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 03:02:45.768700 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 03:02:45.768706 kernel: Initialise system trusted keyrings
Apr 17 03:02:45.768711 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 17 03:02:45.768717 kernel: Key type asymmetric registered
Apr 17 03:02:45.768723 kernel: Asymmetric key parser 'x509' registered
Apr 17 03:02:45.768730 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 17 03:02:45.768736 kernel: io scheduler mq-deadline registered
Apr 17 03:02:45.768742 kernel: io scheduler kyber registered
Apr 17 03:02:45.768748 kernel: io scheduler bfq registered
Apr 17 03:02:45.768753 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 03:02:45.768760 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 17 03:02:45.768765 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 17 03:02:45.768771 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 17 03:02:45.768777 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 03:02:45.768782 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 03:02:45.768790 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 03:02:45.768796 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 03:02:45.768801 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 03:02:45.768854 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 17 03:02:45.768903 kernel: rtc_cmos 00:04: registered as rtc0
Apr 17 03:02:45.768930 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Apr 17 03:02:45.768982 kernel: rtc_cmos 00:04: setting system clock to 2026-04-17T03:02:45 UTC (1776394965)
Apr 17 03:02:45.769034 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 17 03:02:45.769041 kernel: intel_pstate: CPU model not supported
Apr 17 03:02:45.769047 kernel: NET: Registered PF_INET6 protocol family
Apr 17 03:02:45.769052 kernel: Segment Routing with IPv6
Apr 17 03:02:45.769058 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 03:02:45.769063 kernel: NET: Registered PF_PACKET protocol family
Apr 17 03:02:45.769069 kernel: Key type dns_resolver registered
Apr 17 03:02:45.769074 kernel: IPI shorthand broadcast: enabled
Apr 17 03:02:45.769080 kernel: sched_clock: Marking stable (2523008408, 243408776)->(2843005586, -76588402)
Apr 17 03:02:45.769087 kernel: registered taskstats version 1
Apr 17 03:02:45.769093 kernel: Loading compiled-in X.509 certificates
Apr 17 03:02:45.769099 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 92f69eed5a22c94634d5240e5e65306547d4ba83'
Apr 17 03:02:45.769104 kernel: Demotion targets for Node 0: null
Apr 17 03:02:45.769110 kernel: Key type .fscrypt registered
Apr 17 03:02:45.769116 kernel: Key type fscrypt-provisioning registered
Apr 17 03:02:45.769121 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 03:02:45.769127 kernel: ima: Allocated hash algorithm: sha1
Apr 17 03:02:45.769133 kernel: ima: No architecture policies found
Apr 17 03:02:45.769138 kernel: clk: Disabling unused clocks
Apr 17 03:02:45.769145 kernel: Warning: unable to open an initial console.
Apr 17 03:02:45.769150 kernel: Freeing unused kernel image (initmem) memory: 46216K
Apr 17 03:02:45.769156 kernel: Write protecting the kernel read-only data: 40960k
Apr 17 03:02:45.769162 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K
Apr 17 03:02:45.769168 kernel: Run /init as init process
Apr 17 03:02:45.769173 kernel:   with arguments:
Apr 17 03:02:45.769179 kernel:     /init
Apr 17 03:02:45.769184 kernel:   with environment:
Apr 17 03:02:45.769190 kernel:     HOME=/
Apr 17 03:02:45.769197 kernel:     TERM=linux
Apr 17 03:02:45.769203 systemd[1]: Successfully made /usr/ read-only.
Apr 17 03:02:45.769211 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 17 03:02:45.769218 systemd[1]: Detected virtualization kvm.
Apr 17 03:02:45.769224 systemd[1]: Detected architecture x86-64.
Apr 17 03:02:45.769237 systemd[1]: Running in initrd.
Apr 17 03:02:45.769244 systemd[1]: No hostname configured, using default hostname.
Apr 17 03:02:45.769251 systemd[1]: Hostname set to .
Apr 17 03:02:45.769257 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 03:02:45.769263 systemd[1]: Queued start job for default target initrd.target.
Apr 17 03:02:45.769269 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 03:02:45.769275 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 03:02:45.769282 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 03:02:45.769289 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 03:02:45.769296 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 03:02:45.769302 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 03:02:45.769309 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 03:02:45.769316 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 03:02:45.769322 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 03:02:45.769337 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 03:02:45.769345 systemd[1]: Reached target paths.target - Path Units.
Apr 17 03:02:45.769351 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 03:02:45.769365 systemd[1]: Reached target swap.target - Swaps.
Apr 17 03:02:45.769371 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 03:02:45.769377 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 03:02:45.769383 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 03:02:45.769389 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 03:02:45.769395 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 17 03:02:45.769403 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 03:02:45.769409 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 03:02:45.769415 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 03:02:45.769422 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 03:02:45.769429 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 03:02:45.769436 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 03:02:45.769444 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 03:02:45.769450 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 17 03:02:45.769456 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 03:02:45.769463 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 03:02:45.769469 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 03:02:45.769475 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 03:02:45.769481 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 03:02:45.769503 systemd-journald[200]: Collecting audit messages is disabled.
Apr 17 03:02:45.769540 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 03:02:45.769547 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 03:02:45.769554 systemd-journald[200]: Journal started
Apr 17 03:02:45.769570 systemd-journald[200]: Runtime Journal (/run/log/journal/16fc6521c6ce40398b3ec4ba617b07a5) is 6M, max 48.2M, 42.2M free.
Apr 17 03:02:45.769559 systemd-modules-load[202]: Inserted module 'overlay'
Apr 17 03:02:45.772030 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 03:02:45.773389 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 03:02:45.777635 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 03:02:45.786407 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 03:02:45.861196 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 03:02:45.861217 kernel: Bridge firewalling registered
Apr 17 03:02:45.796076 systemd-modules-load[202]: Inserted module 'br_netfilter'
Apr 17 03:02:45.860019 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 03:02:45.862942 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 03:02:45.865473 systemd-tmpfiles[215]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 17 03:02:45.867591 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 03:02:45.870999 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 03:02:45.872084 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 03:02:45.872393 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 03:02:45.890877 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 03:02:45.891363 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 03:02:45.892603 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 03:02:45.908149 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 03:02:45.910511 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 17 03:02:45.917763 systemd-resolved[234]: Positive Trust Anchors:
Apr 17 03:02:45.917782 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 03:02:45.917806 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 03:02:45.919549 systemd-resolved[234]: Defaulting to hostname 'linux'.
Apr 17 03:02:45.928839 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 03:02:45.932898 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 03:02:45.942398 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 17 03:02:46.015952 kernel: SCSI subsystem initialized
Apr 17 03:02:46.023945 kernel: Loading iSCSI transport class v2.0-870.
Apr 17 03:02:46.032945 kernel: iscsi: registered transport (tcp)
Apr 17 03:02:46.050953 kernel: iscsi: registered transport (qla4xxx)
Apr 17 03:02:46.050993 kernel: QLogic iSCSI HBA Driver
Apr 17 03:02:46.065533 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 03:02:46.084807 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 03:02:46.087433 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 03:02:46.119563 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 17 03:02:46.121953 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 17 03:02:46.170957 kernel: raid6: avx512x4 gen() 46561 MB/s
Apr 17 03:02:46.187942 kernel: raid6: avx512x2 gen() 45237 MB/s
Apr 17 03:02:46.204952 kernel: raid6: avx512x1 gen() 45394 MB/s
Apr 17 03:02:46.221947 kernel: raid6: avx2x4 gen() 38018 MB/s
Apr 17 03:02:46.238944 kernel: raid6: avx2x2 gen() 37804 MB/s
Apr 17 03:02:46.256465 kernel: raid6: avx2x1 gen() 29207 MB/s
Apr 17 03:02:46.256492 kernel: raid6: using algorithm avx512x4 gen() 46561 MB/s
Apr 17 03:02:46.274472 kernel: raid6: .... xor() 10290 MB/s, rmw enabled
Apr 17 03:02:46.274494 kernel: raid6: using avx512x2 recovery algorithm
Apr 17 03:02:46.292949 kernel: xor: automatically using best checksumming function avx
Apr 17 03:02:46.414954 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 17 03:02:46.420723 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 03:02:46.423375 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 03:02:46.446878 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Apr 17 03:02:46.450107 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 03:02:46.455962 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 17 03:02:46.471310 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Apr 17 03:02:46.491556 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 03:02:46.495431 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 03:02:46.532585 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 03:02:46.536898 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 17 03:02:46.562944 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 17 03:02:46.565973 kernel: cryptd: max_cpu_qlen set to 1000
Apr 17 03:02:46.569942 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 17 03:02:46.577085 kernel: AES CTR mode by8 optimization enabled
Apr 17 03:02:46.577156 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 17 03:02:46.582048 kernel: GPT:9289727 != 19775487
Apr 17 03:02:46.582077 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 17 03:02:46.582085 kernel: GPT:9289727 != 19775487
Apr 17 03:02:46.582092 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 17 03:02:46.582099 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 03:02:46.588706 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 03:02:46.590737 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 03:02:46.594640 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 03:02:46.599455 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 03:02:46.612976 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 17 03:02:46.618928 kernel: libata version 3.00 loaded.
Apr 17 03:02:46.627116 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 17 03:02:46.716475 kernel: ahci 0000:00:1f.2: version 3.0
Apr 17 03:02:46.716648 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 17 03:02:46.716659 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Apr 17 03:02:46.716740 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Apr 17 03:02:46.716806 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 17 03:02:46.716871 kernel: scsi host0: ahci
Apr 17 03:02:46.716974 kernel: scsi host1: ahci
Apr 17 03:02:46.717045 kernel: scsi host2: ahci
Apr 17 03:02:46.717105 kernel: scsi host3: ahci
Apr 17 03:02:46.717168 kernel: scsi host4: ahci
Apr 17 03:02:46.717230 kernel: scsi host5: ahci
Apr 17 03:02:46.717291 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Apr 17 03:02:46.717299 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Apr 17 03:02:46.717306 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Apr 17 03:02:46.717313 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Apr 17 03:02:46.717320 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Apr 17 03:02:46.717327 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Apr 17 03:02:46.717343 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 03:02:46.725148 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 17 03:02:46.736854 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 17 03:02:46.743325 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 17 03:02:46.745323 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 17 03:02:46.750294 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 17 03:02:46.770183 disk-uuid[647]: Primary Header is updated.
Apr 17 03:02:46.770183 disk-uuid[647]: Secondary Entries is updated.
Apr 17 03:02:46.770183 disk-uuid[647]: Secondary Header is updated.
Apr 17 03:02:46.775750 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 03:02:46.775768 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 03:02:46.939522 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 17 03:02:46.939583 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 17 03:02:46.939935 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 17 03:02:46.940937 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 17 03:02:46.942935 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 17 03:02:46.942950 kernel: ata3.00: LPM support broken, forcing max_power
Apr 17 03:02:46.944634 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 17 03:02:46.944647 kernel: ata3.00: applying bridge limits
Apr 17 03:02:46.945934 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 17 03:02:46.946931 kernel: ata3.00: LPM support broken, forcing max_power
Apr 17 03:02:46.948045 kernel: ata3.00: configured for UDMA/100
Apr 17 03:02:46.948951 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 17 03:02:46.989462 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 17 03:02:46.989655 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 17 03:02:47.001994 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 17 03:02:47.265487 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 17 03:02:47.267611 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 03:02:47.270552 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 03:02:47.273473 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 03:02:47.274192 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 17 03:02:47.295219 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 03:02:47.777904 disk-uuid[648]: The operation has completed successfully.
Apr 17 03:02:47.779613 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 03:02:47.800057 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 17 03:02:47.800144 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 17 03:02:47.821881 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 17 03:02:47.840750 sh[677]: Success
Apr 17 03:02:47.855939 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 17 03:02:47.855980 kernel: device-mapper: uevent: version 1.0.3
Apr 17 03:02:47.855992 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 17 03:02:47.864951 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 17 03:02:47.886636 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 17 03:02:47.890835 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 17 03:02:47.906217 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 17 03:02:47.915029 kernel: BTRFS: device fsid d1542dca-1171-4bcf-9aae-d85dd05fe503 devid 1 transid 32 /dev/mapper/usr (253:0) scanned by mount (689)
Apr 17 03:02:47.915062 kernel: BTRFS info (device dm-0): first mount of filesystem d1542dca-1171-4bcf-9aae-d85dd05fe503
Apr 17 03:02:47.915071 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 17 03:02:47.921569 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 17 03:02:47.921598 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 17 03:02:47.922568 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 17 03:02:47.923076 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 17 03:02:47.926427 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 17 03:02:47.927112 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 17 03:02:47.931562 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 17 03:02:47.953972 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (720)
Apr 17 03:02:47.956882 kernel: BTRFS info (device vda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 17 03:02:47.956955 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 03:02:47.960164 kernel: BTRFS info (device vda6): turning on async discard
Apr 17 03:02:47.960198 kernel: BTRFS info (device vda6): enabling free space tree
Apr 17 03:02:47.964974 kernel: BTRFS info (device vda6): last unmount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 17 03:02:47.965171 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 17 03:02:47.968026 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 17 03:02:48.029406 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 03:02:48.030615 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 03:02:48.039244 ignition[775]: Ignition 2.22.0
Apr 17 03:02:48.039263 ignition[775]: Stage: fetch-offline
Apr 17 03:02:48.039282 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Apr 17 03:02:48.039288 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 03:02:48.039346 ignition[775]: parsed url from cmdline: ""
Apr 17 03:02:48.039348 ignition[775]: no config URL provided
Apr 17 03:02:48.039352 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 03:02:48.039356 ignition[775]: no config at "/usr/lib/ignition/user.ign"
Apr 17 03:02:48.039374 ignition[775]: op(1): [started] loading QEMU firmware config module
Apr 17 03:02:48.039377 ignition[775]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 17 03:02:48.048241 ignition[775]: op(1): [finished] loading QEMU firmware config module
Apr 17 03:02:48.048255 ignition[775]: QEMU firmware config was not found. Ignoring...
Apr 17 03:02:48.067830 systemd-networkd[864]: lo: Link UP
Apr 17 03:02:48.067846 systemd-networkd[864]: lo: Gained carrier
Apr 17 03:02:48.068672 systemd-networkd[864]: Enumeration completed
Apr 17 03:02:48.068947 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 03:02:48.069057 systemd-networkd[864]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 03:02:48.069060 systemd-networkd[864]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 03:02:48.070082 systemd-networkd[864]: eth0: Link UP
Apr 17 03:02:48.070177 systemd-networkd[864]: eth0: Gained carrier
Apr 17 03:02:48.070184 systemd-networkd[864]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 03:02:48.070845 systemd[1]: Reached target network.target - Network.
Apr 17 03:02:48.097966 systemd-networkd[864]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 17 03:02:48.152434 ignition[775]: parsing config with SHA512: 9be48b13f425a938e7ba8f881ee0d3bfc1c76b6e48e143e4201d1f0dc17e2fc11287426ad3b894354c4152a2d96b266e00d946297d3d67c34017a901f3d09cd4
Apr 17 03:02:48.158200 unknown[775]: fetched base config from "system"
Apr 17 03:02:48.158219 unknown[775]: fetched user config from "qemu"
Apr 17 03:02:48.158851 ignition[775]: fetch-offline: fetch-offline passed
Apr 17 03:02:48.158883 systemd-resolved[234]: Detected conflict on linux IN A 10.0.0.7
Apr 17 03:02:48.159029 ignition[775]: Ignition finished successfully
Apr 17 03:02:48.158891 systemd-resolved[234]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Apr 17 03:02:48.160952 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 03:02:48.162999 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 17 03:02:48.163591 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 17 03:02:48.194984 ignition[872]: Ignition 2.22.0
Apr 17 03:02:48.195002 ignition[872]: Stage: kargs
Apr 17 03:02:48.195094 ignition[872]: no configs at "/usr/lib/ignition/base.d"
Apr 17 03:02:48.195099 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 03:02:48.197510 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 17 03:02:48.195602 ignition[872]: kargs: kargs passed
Apr 17 03:02:48.198956 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 17 03:02:48.195651 ignition[872]: Ignition finished successfully
Apr 17 03:02:48.232620 ignition[880]: Ignition 2.22.0
Apr 17 03:02:48.232682 ignition[880]: Stage: disks
Apr 17 03:02:48.232784 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Apr 17 03:02:48.232791 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 03:02:48.233326 ignition[880]: disks: disks passed
Apr 17 03:02:48.233355 ignition[880]: Ignition finished successfully
Apr 17 03:02:48.241338 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 17 03:02:48.242363 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 03:02:48.246323 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 03:02:48.254015 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 03:02:48.254084 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 03:02:48.260874 systemd[1]: Reached target basic.target - Basic System.
Apr 17 03:02:48.266060 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 03:02:48.307377 systemd-fsck[890]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Apr 17 03:02:48.311285 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 03:02:48.312302 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 03:02:48.408978 kernel: EXT4-fs (vda9): mounted filesystem ee420a69-62b9-42f4-84c7-ea3f2d87c569 r/w with ordered data mode. Quota mode: none.
Apr 17 03:02:48.409798 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 03:02:48.410309 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 03:02:48.414931 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 03:02:48.417338 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 03:02:48.419157 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 17 03:02:48.419186 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 03:02:48.433365 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (898)
Apr 17 03:02:48.433394 kernel: BTRFS info (device vda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 17 03:02:48.433402 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 03:02:48.419201 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 03:02:48.438701 kernel: BTRFS info (device vda6): turning on async discard
Apr 17 03:02:48.438716 kernel: BTRFS info (device vda6): enabling free space tree
Apr 17 03:02:48.424768 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 03:02:48.434309 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 03:02:48.439712 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 03:02:48.466715 initrd-setup-root[922]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 03:02:48.469686 initrd-setup-root[929]: cut: /sysroot/etc/group: No such file or directory
Apr 17 03:02:48.474029 initrd-setup-root[936]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 03:02:48.478160 initrd-setup-root[943]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 03:02:48.545851 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 03:02:48.550043 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 03:02:48.550838 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 03:02:48.574958 kernel: BTRFS info (device vda6): last unmount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 17 03:02:48.585078 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 03:02:48.603071 ignition[1012]: INFO : Ignition 2.22.0
Apr 17 03:02:48.603071 ignition[1012]: INFO : Stage: mount
Apr 17 03:02:48.605425 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 03:02:48.605425 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 03:02:48.605425 ignition[1012]: INFO : mount: mount passed
Apr 17 03:02:48.605425 ignition[1012]: INFO : Ignition finished successfully
Apr 17 03:02:48.611427 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 03:02:48.613661 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 03:02:48.912978 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 03:02:48.914971 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 03:02:48.937862 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1024)
Apr 17 03:02:48.937903 kernel: BTRFS info (device vda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 17 03:02:48.937932 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 03:02:48.942959 kernel: BTRFS info (device vda6): turning on async discard
Apr 17 03:02:48.942981 kernel: BTRFS info (device vda6): enabling free space tree
Apr 17 03:02:48.944242 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 03:02:48.977537 ignition[1041]: INFO : Ignition 2.22.0
Apr 17 03:02:48.977537 ignition[1041]: INFO : Stage: files
Apr 17 03:02:48.979698 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 03:02:48.979698 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 03:02:48.979698 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 03:02:48.984425 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 03:02:48.984425 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 03:02:48.988663 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 03:02:48.988663 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 03:02:48.988663 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 03:02:48.988663 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 03:02:48.988663 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 17 03:02:48.987022 unknown[1041]: wrote ssh authorized keys file for user: core
Apr 17 03:02:49.027234 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 03:02:49.068380 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 03:02:49.071146 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 03:02:49.071146 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 03:02:49.071146 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 03:02:49.071146 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 03:02:49.071146 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 03:02:49.071146 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 03:02:49.071146 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 03:02:49.071146 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 03:02:49.071146 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 03:02:49.071146 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 03:02:49.071146 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 03:02:49.071146 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 03:02:49.071146 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 03:02:49.071146 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 17 03:02:49.131304 systemd-networkd[864]: eth0: Gained IPv6LL
Apr 17 03:02:49.380661 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 03:02:49.851536 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 03:02:49.851536 ignition[1041]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 17 03:02:49.856353 ignition[1041]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 03:02:49.856353 ignition[1041]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 03:02:49.856353 ignition[1041]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 03:02:49.856353 ignition[1041]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 17 03:02:49.856353 ignition[1041]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 03:02:49.856353 ignition[1041]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 03:02:49.856353 ignition[1041]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 17 03:02:49.856353 ignition[1041]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 17 03:02:49.878961 ignition[1041]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 03:02:49.878961 ignition[1041]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 03:02:49.878961 ignition[1041]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 17 03:02:49.878961 ignition[1041]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 03:02:49.878961 ignition[1041]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 03:02:49.878961 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 03:02:49.878961 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 03:02:49.878961 ignition[1041]: INFO : files: files passed
Apr 17 03:02:49.878961 ignition[1041]: INFO : Ignition finished successfully
Apr 17 03:02:49.870126 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 03:02:49.873561 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 03:02:49.876068 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 03:02:49.891135 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 03:02:49.908126 initrd-setup-root-after-ignition[1069]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 17 03:02:49.891210 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 03:02:49.911449 initrd-setup-root-after-ignition[1071]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 03:02:49.911449 initrd-setup-root-after-ignition[1071]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 03:02:49.900466 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 03:02:49.917207 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 03:02:49.902170 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 03:02:49.905898 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 03:02:49.961966 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 03:02:49.963616 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 03:02:49.967140 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 03:02:49.968535 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 03:02:49.971064 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 03:02:49.974271 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 03:02:49.999778 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 03:02:50.002308 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 03:02:50.020771 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 03:02:50.022613 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 03:02:50.025498 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 03:02:50.028220 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 03:02:50.028298 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 03:02:50.033455 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 03:02:50.033603 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 03:02:50.037247 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 03:02:50.038366 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 03:02:50.041004 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 03:02:50.043785 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 17 03:02:50.047969 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 03:02:50.050492 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 03:02:50.051803 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 03:02:50.054786 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 03:02:50.059543 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 03:02:50.061214 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 03:02:50.061307 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 03:02:50.064847 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 03:02:50.066172 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 03:02:50.068834 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 03:02:50.068976 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 03:02:50.071746 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 03:02:50.071850 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 03:02:50.078666 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 03:02:50.078767 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 03:02:50.080097 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 03:02:50.082681 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 03:02:50.085972 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 03:02:50.086420 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 03:02:50.089258 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 03:02:50.093728 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 03:02:50.093805 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 03:02:50.096115 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 03:02:50.096202 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 03:02:50.097283 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 03:02:50.097373 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 03:02:50.101260 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 03:02:50.101339 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 03:02:50.106158 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 03:02:50.108716 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 03:02:50.108807 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 03:02:50.122531 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 03:02:50.123782 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 03:02:50.123888 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 03:02:50.127136 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 03:02:50.127199 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 03:02:50.134285 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 03:02:50.134398 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 03:02:50.147059 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 03:02:50.149316 ignition[1097]: INFO : Ignition 2.22.0
Apr 17 03:02:50.149316 ignition[1097]: INFO : Stage: umount
Apr 17 03:02:50.149316 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 03:02:50.149316 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 03:02:50.149316 ignition[1097]: INFO : umount: umount passed
Apr 17 03:02:50.149316 ignition[1097]: INFO : Ignition finished successfully
Apr 17 03:02:50.150356 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 03:02:50.150471 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 03:02:50.151195 systemd[1]: Stopped target network.target - Network.
Apr 17 03:02:50.155633 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 03:02:50.155713 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 03:02:50.159383 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 03:02:50.159433 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 03:02:50.160564 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 03:02:50.160619 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 03:02:50.163430 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 03:02:50.163478 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 03:02:50.165970 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 03:02:50.169979 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 03:02:50.174008 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 03:02:50.174108 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 03:02:50.180073 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 17 03:02:50.180275 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 03:02:50.180372 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 03:02:50.185628 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 17 03:02:50.186140 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 17 03:02:50.187232 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 03:02:50.187269 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 03:02:50.191515 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 03:02:50.194188 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 03:02:50.194231 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 03:02:50.197154 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 03:02:50.197190 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 03:02:50.201390 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 03:02:50.201423 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 03:02:50.202628 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 03:02:50.202676 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 03:02:50.208272 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 03:02:50.210507 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 17 03:02:50.210546 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 17 03:02:50.210769 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 03:02:50.210840 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 03:02:50.217882 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 03:02:50.217999 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 03:02:50.233579 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 03:02:50.233760 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 03:02:50.236276 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 03:02:50.236305 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 03:02:50.238798 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 03:02:50.238823 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 03:02:50.241328 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 03:02:50.241360 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 03:02:50.245139 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 03:02:50.245174 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 03:02:50.247607 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 03:02:50.247633 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 03:02:50.251402 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 03:02:50.253463 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 17 03:02:50.253502 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 03:02:50.256390 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 03:02:50.256418 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 03:02:50.260494 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 03:02:50.260521 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 03:02:50.265500 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 17 03:02:50.265545 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 17 03:02:50.265576 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 17 03:02:50.265818 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 03:02:50.273093 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 03:02:50.278635 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 03:02:50.278727 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 03:02:50.280865 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 03:02:50.284791 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 03:02:50.304149 systemd[1]: Switching root.
Apr 17 03:02:50.332075 systemd-journald[200]: Journal stopped
Apr 17 03:02:51.015150 systemd-journald[200]: Received SIGTERM from PID 1 (systemd).
Apr 17 03:02:51.015195 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 03:02:51.015208 kernel: SELinux: policy capability open_perms=1
Apr 17 03:02:51.015216 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 03:02:51.015223 kernel: SELinux: policy capability always_check_network=0
Apr 17 03:02:51.015231 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 03:02:51.015242 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 03:02:51.015253 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 03:02:51.015263 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 03:02:51.015271 kernel: SELinux: policy capability userspace_initial_context=0
Apr 17 03:02:51.015280 kernel: audit: type=1403 audit(1776394970.446:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 03:02:51.015288 systemd[1]: Successfully loaded SELinux policy in 42.222ms.
Apr 17 03:02:51.015302 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.170ms.
Apr 17 03:02:51.015311 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 17 03:02:51.015319 systemd[1]: Detected virtualization kvm.
Apr 17 03:02:51.015327 systemd[1]: Detected architecture x86-64.
Apr 17 03:02:51.015335 systemd[1]: Detected first boot.
Apr 17 03:02:51.015342 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 03:02:51.015350 zram_generator::config[1142]: No configuration found.
Apr 17 03:02:51.015361 kernel: Guest personality initialized and is inactive
Apr 17 03:02:51.015368 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 17 03:02:51.015375 kernel: Initialized host personality
Apr 17 03:02:51.015382 kernel: NET: Registered PF_VSOCK protocol family
Apr 17 03:02:51.015390 systemd[1]: Populated /etc with preset unit settings.
Apr 17 03:02:51.015399 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 17 03:02:51.015406 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 03:02:51.015415 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 03:02:51.015424 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 03:02:51.015432 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 03:02:51.015440 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 03:02:51.015448 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 03:02:51.015455 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 03:02:51.015463 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 03:02:51.015471 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 03:02:51.015479 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 03:02:51.015486 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 03:02:51.015495 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 03:02:51.015503 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 03:02:51.015511 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 03:02:51.015518 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 03:02:51.015526 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 03:02:51.015534 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 03:02:51.015542 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 03:02:51.015551 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 03:02:51.015559 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 03:02:51.015566 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 03:02:51.015574 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 03:02:51.015583 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 03:02:51.015591 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 03:02:51.015599 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 03:02:51.015607 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 03:02:51.015614 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 03:02:51.015623 systemd[1]: Reached target swap.target - Swaps.
Apr 17 03:02:51.015631 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 03:02:51.015638 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 03:02:51.015663 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 17 03:02:51.015671 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 03:02:51.015679 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 03:02:51.015687 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 03:02:51.015695 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 03:02:51.015702 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 03:02:51.015710 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 03:02:51.015720 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 03:02:51.015728 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 03:02:51.015735 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 03:02:51.015743 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 03:02:51.015750 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 03:02:51.015758 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 03:02:51.015766 systemd[1]: Reached target machines.target - Containers.
Apr 17 03:02:51.015776 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 03:02:51.015786 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 03:02:51.015794 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 03:02:51.015802 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 03:02:51.015810 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 03:02:51.015820 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 03:02:51.015828 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 03:02:51.015835 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 03:02:51.015843 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 03:02:51.015853 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 03:02:51.015861 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 03:02:51.015868 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 03:02:51.015877 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 03:02:51.015885 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 03:02:51.015893 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 17 03:02:51.015901 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 03:02:51.015984 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 03:02:51.015994 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 03:02:51.016004 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 03:02:51.016011 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 17 03:02:51.016019 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 03:02:51.016027 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 03:02:51.016035 systemd[1]: Stopped verity-setup.service.
Apr 17 03:02:51.016044 kernel: loop: module loaded
Apr 17 03:02:51.016052 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 03:02:51.016060 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 03:02:51.016068 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 03:02:51.016076 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 03:02:51.016083 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 03:02:51.016108 systemd-journald[1198]: Collecting audit messages is disabled.
Apr 17 03:02:51.016125 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 03:02:51.016134 systemd-journald[1198]: Journal started
Apr 17 03:02:51.016151 systemd-journald[1198]: Runtime Journal (/run/log/journal/16fc6521c6ce40398b3ec4ba617b07a5) is 6M, max 48.2M, 42.2M free.
Apr 17 03:02:50.784758 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 03:02:50.797812 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 17 03:02:50.798196 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 03:02:51.020138 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 03:02:51.021224 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 03:02:51.022742 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 03:02:51.024492 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 03:02:51.024622 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 03:02:51.026398 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 03:02:51.026519 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 03:02:51.028328 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 03:02:51.028522 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 03:02:51.030515 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 03:02:51.030970 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 03:02:51.033454 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 03:02:51.035232 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 03:02:51.037733 kernel: fuse: init (API version 7.41)
Apr 17 03:02:51.037677 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 03:02:51.039553 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 17 03:02:51.041481 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 03:02:51.041607 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 03:02:51.049255 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 03:02:51.053015 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 03:02:51.055298 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 03:02:51.057005 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 03:02:51.057037 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 03:02:51.059385 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 17 03:02:51.062988 kernel: ACPI: bus type drm_connector registered
Apr 17 03:02:51.068669 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 03:02:51.070289 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 03:02:51.071095 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 03:02:51.073107 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 03:02:51.074720 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 03:02:51.078020 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 03:02:51.079503 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 03:02:51.080180 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 03:02:51.080486 systemd-journald[1198]: Time spent on flushing to /var/log/journal/16fc6521c6ce40398b3ec4ba617b07a5 is 9.985ms for 980 entries.
Apr 17 03:02:51.080486 systemd-journald[1198]: System Journal (/var/log/journal/16fc6521c6ce40398b3ec4ba617b07a5) is 8M, max 195.6M, 187.6M free.
Apr 17 03:02:51.099739 systemd-journald[1198]: Received client request to flush runtime journal.
Apr 17 03:02:51.099784 kernel: loop0: detected capacity change from 0 to 219192
Apr 17 03:02:51.084222 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 03:02:51.087507 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 03:02:51.089350 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 03:02:51.089460 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 03:02:51.091883 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 03:02:51.093704 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 03:02:51.096416 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 03:02:51.098524 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 03:02:51.100595 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 03:02:51.107023 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 03:02:51.110091 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 03:02:51.114138 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 17 03:02:51.118317 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 03:02:51.118258 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 03:02:51.133091 kernel: loop1: detected capacity change from 0 to 110984
Apr 17 03:02:51.138574 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 17 03:02:51.142356 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 03:02:51.145208 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 03:02:51.156160 kernel: loop2: detected capacity change from 0 to 128560
Apr 17 03:02:51.167507 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Apr 17 03:02:51.167520 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Apr 17 03:02:51.171148 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 03:02:51.179929 kernel: loop3: detected capacity change from 0 to 219192
Apr 17 03:02:51.190925 kernel: loop4: detected capacity change from 0 to 110984
Apr 17 03:02:51.201935 kernel: loop5: detected capacity change from 0 to 128560
Apr 17 03:02:51.210035 (sd-merge)[1285]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 17 03:02:51.210358 (sd-merge)[1285]: Merged extensions into '/usr'.
Apr 17 03:02:51.214562 systemd[1]: Reload requested from client PID 1259 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 03:02:51.214584 systemd[1]: Reloading...
Apr 17 03:02:51.256263 zram_generator::config[1310]: No configuration found.
Apr 17 03:02:51.323308 ldconfig[1254]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 03:02:51.382816 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 03:02:51.382893 systemd[1]: Reloading finished in 168 ms.
Apr 17 03:02:51.408246 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 17 03:02:51.410112 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 03:02:51.425144 systemd[1]: Starting ensure-sysext.service...
Apr 17 03:02:51.427269 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 03:02:51.437616 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 03:02:51.440727 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 03:02:51.444223 systemd[1]: Reload requested from client PID 1349 ('systemctl') (unit ensure-sysext.service)...
Apr 17 03:02:51.444243 systemd[1]: Reloading...
Apr 17 03:02:51.444504 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 17 03:02:51.444539 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 17 03:02:51.444699 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 03:02:51.444865 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 03:02:51.445352 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 03:02:51.445511 systemd-tmpfiles[1350]: ACLs are not supported, ignoring.
Apr 17 03:02:51.445556 systemd-tmpfiles[1350]: ACLs are not supported, ignoring.
Apr 17 03:02:51.447734 systemd-tmpfiles[1350]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 03:02:51.447745 systemd-tmpfiles[1350]: Skipping /boot
Apr 17 03:02:51.452208 systemd-tmpfiles[1350]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 03:02:51.452225 systemd-tmpfiles[1350]: Skipping /boot
Apr 17 03:02:51.477983 zram_generator::config[1378]: No configuration found.
Apr 17 03:02:51.478557 systemd-udevd[1353]: Using default interface naming scheme 'v255'.
Apr 17 03:02:51.575966 kernel: mousedev: PS/2 mouse device common for all mice
Apr 17 03:02:51.591967 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Apr 17 03:02:51.599450 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 17 03:02:51.599641 kernel: ACPI: button: Power Button [PWRF]
Apr 17 03:02:51.599673 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 17 03:02:51.656317 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 17 03:02:51.658388 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 17 03:02:51.658826 systemd[1]: Reloading finished in 214 ms.
Apr 17 03:02:51.676015 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 03:02:51.678778 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 03:02:51.749036 systemd[1]: Finished ensure-sysext.service.
Apr 17 03:02:51.753731 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 17 03:02:51.756152 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 03:02:51.758726 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 03:02:51.759424 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 03:02:51.763093 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 03:02:51.766513 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 03:02:51.770080 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 03:02:51.772689 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 03:02:51.773542 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 03:02:51.776040 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 17 03:02:51.778045 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 03:02:51.783046 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 03:02:51.788044 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 03:02:51.793380 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 17 03:02:51.798030 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 03:02:51.801544 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 03:02:51.804680 augenrules[1501]: No rules
Apr 17 03:02:51.806785 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 17 03:02:51.808414 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 17 03:02:51.812279 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 17 03:02:51.815103 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 03:02:51.815242 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 03:02:51.815469 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 03:02:51.815590 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 03:02:51.816006 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 03:02:51.816248 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 03:02:51.816481 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 03:02:51.816619 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 03:02:51.816988 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 03:02:51.817431 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 03:02:51.823261 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 03:02:51.823332 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 03:02:51.823371 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 03:02:51.824362 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 17 03:02:51.827433 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 03:02:51.827528 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 03:02:51.827549 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 03:02:51.832143 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 03:02:51.841598 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 17 03:02:51.855366 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 03:02:51.911807 systemd-resolved[1486]: Positive Trust Anchors:
Apr 17 03:02:51.911829 systemd-resolved[1486]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 03:02:51.911854 systemd-resolved[1486]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 03:02:51.913115 systemd-networkd[1483]: lo: Link UP
Apr 17 03:02:51.913131 systemd-networkd[1483]: lo: Gained carrier
Apr 17 03:02:51.914001 systemd-networkd[1483]: Enumeration completed
Apr 17 03:02:51.914336 systemd-networkd[1483]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 03:02:51.914349 systemd-networkd[1483]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 03:02:51.914632 systemd-networkd[1483]: eth0: Link UP
Apr 17 03:02:51.914728 systemd-resolved[1486]: Defaulting to hostname 'linux'.
Apr 17 03:02:51.914754 systemd-networkd[1483]: eth0: Gained carrier
Apr 17 03:02:51.914765 systemd-networkd[1483]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 03:02:51.931004 systemd-networkd[1483]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 17 03:02:51.931475 systemd-timesyncd[1492]: Network configuration changed, trying to establish connection.
Apr 17 03:02:51.932532 systemd-timesyncd[1492]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 17 03:02:51.932587 systemd-timesyncd[1492]: Initial clock synchronization to Fri 2026-04-17 03:02:52.134301 UTC.
Apr 17 03:02:51.935280 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 17 03:02:51.937383 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 03:02:51.938892 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 03:02:51.941012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 03:02:51.943131 systemd[1]: Reached target network.target - Network.
Apr 17 03:02:51.944353 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 03:02:51.945943 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 03:02:51.948895 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 17 03:02:51.950675 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 17 03:02:51.952301 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Apr 17 03:02:51.954094 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 17 03:02:51.955840 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 17 03:02:51.955878 systemd[1]: Reached target paths.target - Path Units.
Apr 17 03:02:51.958646 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 03:02:51.960690 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 17 03:02:51.962273 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 17 03:02:51.963936 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 03:02:51.966020 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 17 03:02:51.968562 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 17 03:02:51.971294 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 17 03:02:51.973317 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 17 03:02:51.975000 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 17 03:02:51.981373 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 17 03:02:51.983248 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 17 03:02:51.985747 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 17 03:02:51.988052 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 03:02:51.990343 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 17 03:02:51.992499 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 03:02:51.993887 systemd[1]: Reached target basic.target - Basic System.
Apr 17 03:02:51.995430 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 17 03:02:51.995447 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 17 03:02:51.996194 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 17 03:02:51.998333 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 17 03:02:52.005677 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 03:02:52.009206 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 17 03:02:52.011765 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 17 03:02:52.013308 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 17 03:02:52.014105 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Apr 17 03:02:52.016244 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 17 03:02:52.018655 jq[1540]: false
Apr 17 03:02:52.019043 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 17 03:02:52.021411 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 17 03:02:52.026150 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Refreshing passwd entry cache
Apr 17 03:02:52.025106 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 17 03:02:52.025021 oslogin_cache_refresh[1542]: Refreshing passwd entry cache
Apr 17 03:02:52.028538 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 17 03:02:52.029239 extend-filesystems[1541]: Found /dev/vda6
Apr 17 03:02:52.030603 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 17 03:02:52.031023 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 17 03:02:52.033929 systemd[1]: Starting update-engine.service - Update Engine...
Apr 17 03:02:52.034873 oslogin_cache_refresh[1542]: Failure getting users, quitting
Apr 17 03:02:52.035076 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Failure getting users, quitting
Apr 17 03:02:52.035076 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 17 03:02:52.035076 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Refreshing group entry cache
Apr 17 03:02:52.034888 oslogin_cache_refresh[1542]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 17 03:02:52.034925 oslogin_cache_refresh[1542]: Refreshing group entry cache
Apr 17 03:02:52.035223 extend-filesystems[1541]: Found /dev/vda9
Apr 17 03:02:52.038125 extend-filesystems[1541]: Checking size of /dev/vda9
Apr 17 03:02:52.037797 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 17 03:02:52.040526 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 17 03:02:52.043038 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Failure getting groups, quitting
Apr 17 03:02:52.043038 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 17 03:02:52.042775 oslogin_cache_refresh[1542]: Failure getting groups, quitting
Apr 17 03:02:52.042784 oslogin_cache_refresh[1542]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 17 03:02:52.046226 extend-filesystems[1541]: Resized partition /dev/vda9
Apr 17 03:02:52.050404 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 03:02:52.052776 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 17 03:02:52.052853 jq[1562]: true
Apr 17 03:02:52.052983 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 17 03:02:52.053206 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Apr 17 03:02:52.053354 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Apr 17 03:02:52.054170 extend-filesystems[1568]: resize2fs 1.47.3 (8-Jul-2025)
Apr 17 03:02:52.057721 systemd[1]: motdgen.service: Deactivated successfully.
Apr 17 03:02:52.058015 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 17 03:02:52.061572 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 17 03:02:52.062013 update_engine[1554]: I20260417 03:02:52.061859 1554 main.cc:92] Flatcar Update Engine starting
Apr 17 03:02:52.064169 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 17 03:02:52.064355 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 17 03:02:52.081782 tar[1569]: linux-amd64/LICENSE
Apr 17 03:02:52.083974 tar[1569]: linux-amd64/helm
Apr 17 03:02:52.083176 (ntainerd)[1572]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 17 03:02:52.085961 jq[1570]: true
Apr 17 03:02:52.094808 systemd-logind[1551]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 17 03:02:52.095952 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 17 03:02:52.096020 systemd-logind[1551]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 17 03:02:52.096153 systemd-logind[1551]: New seat seat0.
Apr 17 03:02:52.096549 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 17 03:02:52.109951 extend-filesystems[1568]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 17 03:02:52.109951 extend-filesystems[1568]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 17 03:02:52.109951 extend-filesystems[1568]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 17 03:02:52.119326 extend-filesystems[1541]: Resized filesystem in /dev/vda9
Apr 17 03:02:52.116217 dbus-daemon[1538]: [system] SELinux support is enabled
Apr 17 03:02:52.110645 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 17 03:02:52.111253 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 17 03:02:52.116330 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 17 03:02:52.122110 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 17 03:02:52.122957 dbus-daemon[1538]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 17 03:02:52.125028 update_engine[1554]: I20260417 03:02:52.122581 1554 update_check_scheduler.cc:74] Next update check in 5m16s
Apr 17 03:02:52.122136 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 17 03:02:52.124279 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 17 03:02:52.124294 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 17 03:02:52.126235 systemd[1]: Started update-engine.service - Update Engine.
Apr 17 03:02:52.129688 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 17 03:02:52.131594 bash[1600]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 03:02:52.133096 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 17 03:02:52.135331 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 17 03:02:52.166784 locksmithd[1602]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 17 03:02:52.198542 sshd_keygen[1567]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 17 03:02:52.217131 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 17 03:02:52.220503 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 17 03:02:52.222628 containerd[1572]: time="2026-04-17T03:02:52Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 17 03:02:52.223266 containerd[1572]: time="2026-04-17T03:02:52.223223098Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Apr 17 03:02:52.230374 containerd[1572]: time="2026-04-17T03:02:52.230255044Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.702µs"
Apr 17 03:02:52.230374 containerd[1572]: time="2026-04-17T03:02:52.230312637Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 17 03:02:52.230374 containerd[1572]: time="2026-04-17T03:02:52.230337032Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 17 03:02:52.230503 containerd[1572]: time="2026-04-17T03:02:52.230483190Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 17 03:02:52.230503 containerd[1572]: time="2026-04-17T03:02:52.230505627Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 17 03:02:52.230560 containerd[1572]: time="2026-04-17T03:02:52.230527765Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 17 03:02:52.230577 containerd[1572]: time="2026-04-17T03:02:52.230568393Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 17 03:02:52.230593 containerd[1572]: time="2026-04-17T03:02:52.230579727Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 17 03:02:52.230811 containerd[1572]: time="2026-04-17T03:02:52.230753866Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 17 03:02:52.230811 containerd[1572]: time="2026-04-17T03:02:52.230777243Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 17 03:02:52.230811 containerd[1572]: time="2026-04-17T03:02:52.230789322Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 17 03:02:52.230811 containerd[1572]: time="2026-04-17T03:02:52.230798284Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 17 03:02:52.230894 containerd[1572]: time="2026-04-17T03:02:52.230877460Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 17 03:02:52.231150 containerd[1572]: time="2026-04-17T03:02:52.231128202Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 17 03:02:52.231185 containerd[1572]: time="2026-04-17T03:02:52.231166392Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 17 03:02:52.231207 containerd[1572]: time="2026-04-17T03:02:52.231184408Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 17 03:02:52.231220 containerd[1572]: time="2026-04-17T03:02:52.231203804Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 17 03:02:52.232519 containerd[1572]: time="2026-04-17T03:02:52.232161343Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 17 03:02:52.232519 containerd[1572]: time="2026-04-17T03:02:52.232214244Z" level=info msg="metadata content store policy set" policy=shared
Apr 17 03:02:52.237444 containerd[1572]: time="2026-04-17T03:02:52.237402121Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 17 03:02:52.237508 containerd[1572]: time="2026-04-17T03:02:52.237466715Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 17 03:02:52.237508 containerd[1572]: time="2026-04-17T03:02:52.237501207Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 17 03:02:52.237538 containerd[1572]: time="2026-04-17T03:02:52.237511902Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 17 03:02:52.237538 containerd[1572]: time="2026-04-17T03:02:52.237524096Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 17 03:02:52.237538 containerd[1572]: time="2026-04-17T03:02:52.237532156Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 17 03:02:52.237588 containerd[1572]: time="2026-04-17T03:02:52.237543744Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 17 03:02:52.237588 containerd[1572]: time="2026-04-17T03:02:52.237553209Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 17 03:02:52.237588 containerd[1572]: time="2026-04-17T03:02:52.237561576Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 17 03:02:52.237588 containerd[1572]: time="2026-04-17T03:02:52.237569882Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 17 03:02:52.237588 containerd[1572]: time="2026-04-17T03:02:52.237577499Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 17 03:02:52.237649 containerd[1572]: time="2026-04-17T03:02:52.237589100Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 17 03:02:52.237692 containerd[1572]: time="2026-04-17T03:02:52.237673228Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 17 03:02:52.237711 containerd[1572]: time="2026-04-17T03:02:52.237698669Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 17 03:02:52.237725 containerd[1572]: time="2026-04-17T03:02:52.237713663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 17 03:02:52.237738 containerd[1572]: time="2026-04-17T03:02:52.237726971Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 17 03:02:52.237752 containerd[1572]: time="2026-04-17T03:02:52.237736262Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 17 03:02:52.237752 containerd[1572]: time="2026-04-17T03:02:52.237744529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 17 03:02:52.237780 containerd[1572]: time="2026-04-17T03:02:52.237753936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 17 03:02:52.237780 containerd[1572]: time="2026-04-17T03:02:52.237761389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 17 03:02:52.237780 containerd[1572]: time="2026-04-17T03:02:52.237769791Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 17 03:02:52.237780 containerd[1572]: time="2026-04-17T03:02:52.237777920Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 17 03:02:52.237835 containerd[1572]: time="2026-04-17T03:02:52.237784897Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 17 03:02:52.237835 containerd[1572]: time="2026-04-17T03:02:52.237824311Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 17 03:02:52.237863 containerd[1572]: time="2026-04-17T03:02:52.237834446Z" level=info msg="Start snapshots syncer"
Apr 17 03:02:52.237863 containerd[1572]: time="2026-04-17T03:02:52.237847995Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 17 03:02:52.238119 containerd[1572]: time="2026-04-17T03:02:52.238075420Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 17 03:02:52.238117 systemd[1]: issuegen.service: Deactivated successfully.
Apr 17 03:02:52.238255 containerd[1572]: time="2026-04-17T03:02:52.238126948Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 17 03:02:52.238271 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 03:02:52.240715 containerd[1572]: time="2026-04-17T03:02:52.240692851Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 17 03:02:52.240854 containerd[1572]: time="2026-04-17T03:02:52.240842170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 17 03:02:52.240911 containerd[1572]: time="2026-04-17T03:02:52.240903859Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 17 03:02:52.240995 containerd[1572]: time="2026-04-17T03:02:52.240986634Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 17 03:02:52.241038 containerd[1572]: time="2026-04-17T03:02:52.241031215Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 17 03:02:52.241071 containerd[1572]: time="2026-04-17T03:02:52.241064822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 17 03:02:52.241099 containerd[1572]: time="2026-04-17T03:02:52.241093578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 17 03:02:52.241126 containerd[1572]: time="2026-04-17T03:02:52.241121262Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 17 03:02:52.241165 containerd[1572]: time="2026-04-17T03:02:52.241158957Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 17 03:02:52.241193 containerd[1572]: time="2026-04-17T03:02:52.241187496Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 17 03:02:52.241204 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 03:02:52.242748 containerd[1572]: time="2026-04-17T03:02:52.241261817Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 17 03:02:52.242828 containerd[1572]: time="2026-04-17T03:02:52.242818578Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 17 03:02:52.242867 containerd[1572]: time="2026-04-17T03:02:52.242859235Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 17 03:02:52.242894 containerd[1572]: time="2026-04-17T03:02:52.242888076Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 17 03:02:52.242953 containerd[1572]: time="2026-04-17T03:02:52.242945367Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 17 03:02:52.242990 containerd[1572]: time="2026-04-17T03:02:52.242983787Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 17 03:02:52.243017 containerd[1572]: time="2026-04-17T03:02:52.243011616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 17 03:02:52.243051 containerd[1572]: time="2026-04-17T03:02:52.243045268Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 17 03:02:52.243084 containerd[1572]: time="2026-04-17T03:02:52.243078457Z" level=info msg="runtime interface created" Apr 17 03:02:52.243106 containerd[1572]: time="2026-04-17T03:02:52.243101792Z" level=info msg="created NRI interface" Apr 17 03:02:52.243132 
containerd[1572]: time="2026-04-17T03:02:52.243126792Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 17 03:02:52.243161 containerd[1572]: time="2026-04-17T03:02:52.243156832Z" level=info msg="Connect containerd service" Apr 17 03:02:52.243203 containerd[1572]: time="2026-04-17T03:02:52.243197039Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 03:02:52.244050 containerd[1572]: time="2026-04-17T03:02:52.244031401Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 03:02:52.256100 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 03:02:52.260258 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 03:02:52.263685 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 03:02:52.265593 systemd[1]: Reached target getty.target - Login Prompts. Apr 17 03:02:52.310443 containerd[1572]: time="2026-04-17T03:02:52.310383799Z" level=info msg="Start subscribing containerd event" Apr 17 03:02:52.310533 containerd[1572]: time="2026-04-17T03:02:52.310487573Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 03:02:52.310735 containerd[1572]: time="2026-04-17T03:02:52.310493961Z" level=info msg="Start recovering state" Apr 17 03:02:52.310794 containerd[1572]: time="2026-04-17T03:02:52.310549899Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 17 03:02:52.312143 containerd[1572]: time="2026-04-17T03:02:52.312109762Z" level=info msg="Start event monitor" Apr 17 03:02:52.312143 containerd[1572]: time="2026-04-17T03:02:52.312141795Z" level=info msg="Start cni network conf syncer for default" Apr 17 03:02:52.312181 containerd[1572]: time="2026-04-17T03:02:52.312151776Z" level=info msg="Start streaming server" Apr 17 03:02:52.312181 containerd[1572]: time="2026-04-17T03:02:52.312161071Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 17 03:02:52.312181 containerd[1572]: time="2026-04-17T03:02:52.312166921Z" level=info msg="runtime interface starting up..." Apr 17 03:02:52.312181 containerd[1572]: time="2026-04-17T03:02:52.312171633Z" level=info msg="starting plugins..." Apr 17 03:02:52.312243 containerd[1572]: time="2026-04-17T03:02:52.312184282Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 17 03:02:52.312499 containerd[1572]: time="2026-04-17T03:02:52.312301879Z" level=info msg="containerd successfully booted in 0.089959s" Apr 17 03:02:52.312404 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 03:02:52.370038 tar[1569]: linux-amd64/README.md Apr 17 03:02:52.392497 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 17 03:02:53.867761 systemd-networkd[1483]: eth0: Gained IPv6LL Apr 17 03:02:53.870011 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 17 03:02:53.872168 systemd[1]: Reached target network-online.target - Network is Online. Apr 17 03:02:53.874814 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 17 03:02:53.877327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 03:02:53.887282 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 17 03:02:53.900152 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Apr 17 03:02:53.900317 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 17 03:02:53.902428 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 03:02:53.904825 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 17 03:02:54.483345 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 03:02:54.485476 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 03:02:54.487812 (kubelet)[1671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 03:02:54.488006 systemd[1]: Startup finished in 2.577s (kernel) + 4.837s (initrd) + 4.081s (userspace) = 11.496s. Apr 17 03:02:54.831011 kubelet[1671]: E0417 03:02:54.830790 1671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 03:02:54.833029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 03:02:54.833141 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 03:02:54.833382 systemd[1]: kubelet.service: Consumed 768ms CPU time, 256.5M memory peak. Apr 17 03:02:58.302020 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 17 03:02:58.303013 systemd[1]: Started sshd@0-10.0.0.7:22-10.0.0.1:59470.service - OpenSSH per-connection server daemon (10.0.0.1:59470). 
Apr 17 03:02:58.364365 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 59470 ssh2: RSA SHA256:FVrkeUr4F1DUvuGbghPLjRpHgCWbfVIbP6ixe+jkRU8 Apr 17 03:02:58.365720 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 03:02:58.370517 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 03:02:58.371220 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 03:02:58.375644 systemd-logind[1551]: New session 1 of user core. Apr 17 03:02:58.394437 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 03:02:58.396311 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 03:02:58.401387 (systemd)[1689]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 03:02:58.403353 systemd-logind[1551]: New session c1 of user core. Apr 17 03:02:58.498608 systemd[1689]: Queued start job for default target default.target. Apr 17 03:02:58.510902 systemd[1689]: Created slice app.slice - User Application Slice. Apr 17 03:02:58.510970 systemd[1689]: Reached target paths.target - Paths. Apr 17 03:02:58.511001 systemd[1689]: Reached target timers.target - Timers. Apr 17 03:02:58.511884 systemd[1689]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 03:02:58.520699 systemd[1689]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 03:02:58.520754 systemd[1689]: Reached target sockets.target - Sockets. Apr 17 03:02:58.520809 systemd[1689]: Reached target basic.target - Basic System. Apr 17 03:02:58.520832 systemd[1689]: Reached target default.target - Main User Target. Apr 17 03:02:58.520849 systemd[1689]: Startup finished in 112ms. Apr 17 03:02:58.520978 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 03:02:58.522211 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 17 03:02:58.531044 systemd[1]: Started sshd@1-10.0.0.7:22-10.0.0.1:59482.service - OpenSSH per-connection server daemon (10.0.0.1:59482). Apr 17 03:02:58.567886 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 59482 ssh2: RSA SHA256:FVrkeUr4F1DUvuGbghPLjRpHgCWbfVIbP6ixe+jkRU8 Apr 17 03:02:58.568830 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 03:02:58.572363 systemd-logind[1551]: New session 2 of user core. Apr 17 03:02:58.582059 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 03:02:58.591367 sshd[1703]: Connection closed by 10.0.0.1 port 59482 Apr 17 03:02:58.591749 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Apr 17 03:02:58.605313 systemd[1]: sshd@1-10.0.0.7:22-10.0.0.1:59482.service: Deactivated successfully. Apr 17 03:02:58.606398 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 03:02:58.607025 systemd-logind[1551]: Session 2 logged out. Waiting for processes to exit. Apr 17 03:02:58.608608 systemd[1]: Started sshd@2-10.0.0.7:22-10.0.0.1:59486.service - OpenSSH per-connection server daemon (10.0.0.1:59486). Apr 17 03:02:58.609026 systemd-logind[1551]: Removed session 2. Apr 17 03:02:58.647362 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 59486 ssh2: RSA SHA256:FVrkeUr4F1DUvuGbghPLjRpHgCWbfVIbP6ixe+jkRU8 Apr 17 03:02:58.648376 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 03:02:58.651853 systemd-logind[1551]: New session 3 of user core. Apr 17 03:02:58.662188 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 03:02:58.668313 sshd[1713]: Connection closed by 10.0.0.1 port 59486 Apr 17 03:02:58.668618 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Apr 17 03:02:58.677256 systemd[1]: sshd@2-10.0.0.7:22-10.0.0.1:59486.service: Deactivated successfully. 
Apr 17 03:02:58.678330 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 03:02:58.678882 systemd-logind[1551]: Session 3 logged out. Waiting for processes to exit. Apr 17 03:02:58.680476 systemd[1]: Started sshd@3-10.0.0.7:22-10.0.0.1:59500.service - OpenSSH per-connection server daemon (10.0.0.1:59500). Apr 17 03:02:58.680970 systemd-logind[1551]: Removed session 3. Apr 17 03:02:58.728228 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 59500 ssh2: RSA SHA256:FVrkeUr4F1DUvuGbghPLjRpHgCWbfVIbP6ixe+jkRU8 Apr 17 03:02:58.729150 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 03:02:58.732605 systemd-logind[1551]: New session 4 of user core. Apr 17 03:02:58.742061 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 03:02:58.750819 sshd[1722]: Connection closed by 10.0.0.1 port 59500 Apr 17 03:02:58.751155 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Apr 17 03:02:58.764406 systemd[1]: sshd@3-10.0.0.7:22-10.0.0.1:59500.service: Deactivated successfully. Apr 17 03:02:58.765488 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 03:02:58.765985 systemd-logind[1551]: Session 4 logged out. Waiting for processes to exit. Apr 17 03:02:58.767584 systemd[1]: Started sshd@4-10.0.0.7:22-10.0.0.1:59502.service - OpenSSH per-connection server daemon (10.0.0.1:59502). Apr 17 03:02:58.767999 systemd-logind[1551]: Removed session 4. Apr 17 03:02:58.808883 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 59502 ssh2: RSA SHA256:FVrkeUr4F1DUvuGbghPLjRpHgCWbfVIbP6ixe+jkRU8 Apr 17 03:02:58.809878 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 03:02:58.813448 systemd-logind[1551]: New session 5 of user core. Apr 17 03:02:58.827211 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 17 03:02:58.840149 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 03:02:58.840333 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 03:02:58.855760 sudo[1732]: pam_unix(sudo:session): session closed for user root Apr 17 03:02:58.857033 sshd[1731]: Connection closed by 10.0.0.1 port 59502 Apr 17 03:02:58.857390 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Apr 17 03:02:58.867968 systemd[1]: sshd@4-10.0.0.7:22-10.0.0.1:59502.service: Deactivated successfully. Apr 17 03:02:58.869063 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 03:02:58.869601 systemd-logind[1551]: Session 5 logged out. Waiting for processes to exit. Apr 17 03:02:58.871190 systemd[1]: Started sshd@5-10.0.0.7:22-10.0.0.1:59508.service - OpenSSH per-connection server daemon (10.0.0.1:59508). Apr 17 03:02:58.871857 systemd-logind[1551]: Removed session 5. Apr 17 03:02:58.910604 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 59508 ssh2: RSA SHA256:FVrkeUr4F1DUvuGbghPLjRpHgCWbfVIbP6ixe+jkRU8 Apr 17 03:02:58.911685 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 03:02:58.915637 systemd-logind[1551]: New session 6 of user core. Apr 17 03:02:58.925414 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 17 03:02:58.935076 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 03:02:58.935262 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 03:02:58.938257 sudo[1743]: pam_unix(sudo:session): session closed for user root Apr 17 03:02:58.942666 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 17 03:02:58.942895 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 03:02:58.950320 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 17 03:02:58.984250 augenrules[1765]: No rules Apr 17 03:02:58.985234 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 03:02:58.985414 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 17 03:02:58.986182 sudo[1742]: pam_unix(sudo:session): session closed for user root Apr 17 03:02:58.987193 sshd[1741]: Connection closed by 10.0.0.1 port 59508 Apr 17 03:02:58.987469 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Apr 17 03:02:58.996406 systemd[1]: sshd@5-10.0.0.7:22-10.0.0.1:59508.service: Deactivated successfully. Apr 17 03:02:58.997480 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 03:02:58.998035 systemd-logind[1551]: Session 6 logged out. Waiting for processes to exit. Apr 17 03:02:58.999385 systemd[1]: Started sshd@6-10.0.0.7:22-10.0.0.1:59516.service - OpenSSH per-connection server daemon (10.0.0.1:59516). Apr 17 03:02:59.000167 systemd-logind[1551]: Removed session 6. Apr 17 03:02:59.043607 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 59516 ssh2: RSA SHA256:FVrkeUr4F1DUvuGbghPLjRpHgCWbfVIbP6ixe+jkRU8 Apr 17 03:02:59.044418 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 03:02:59.048188 systemd-logind[1551]: New session 7 of user core. 
Apr 17 03:02:59.060197 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 03:02:59.069208 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 03:02:59.069391 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 03:02:59.305069 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 17 03:02:59.319432 (dockerd)[1799]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 03:02:59.501425 dockerd[1799]: time="2026-04-17T03:02:59.501358726Z" level=info msg="Starting up" Apr 17 03:02:59.502089 dockerd[1799]: time="2026-04-17T03:02:59.502018531Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 17 03:02:59.512549 dockerd[1799]: time="2026-04-17T03:02:59.512492005Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 17 03:02:59.627467 dockerd[1799]: time="2026-04-17T03:02:59.627300338Z" level=info msg="Loading containers: start." Apr 17 03:02:59.636965 kernel: Initializing XFRM netlink socket Apr 17 03:02:59.837146 systemd-networkd[1483]: docker0: Link UP Apr 17 03:02:59.841835 dockerd[1799]: time="2026-04-17T03:02:59.841782370Z" level=info msg="Loading containers: done." 
Apr 17 03:02:59.855192 dockerd[1799]: time="2026-04-17T03:02:59.855143353Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 03:02:59.855294 dockerd[1799]: time="2026-04-17T03:02:59.855226716Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 17 03:02:59.855294 dockerd[1799]: time="2026-04-17T03:02:59.855282164Z" level=info msg="Initializing buildkit" Apr 17 03:02:59.878423 dockerd[1799]: time="2026-04-17T03:02:59.878186123Z" level=info msg="Completed buildkit initialization" Apr 17 03:02:59.882487 dockerd[1799]: time="2026-04-17T03:02:59.882431790Z" level=info msg="Daemon has completed initialization" Apr 17 03:02:59.882551 dockerd[1799]: time="2026-04-17T03:02:59.882499919Z" level=info msg="API listen on /run/docker.sock" Apr 17 03:02:59.882717 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 17 03:03:00.242490 containerd[1572]: time="2026-04-17T03:03:00.242380467Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 17 03:03:00.735092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount86594637.mount: Deactivated successfully. 
Apr 17 03:03:01.333007 containerd[1572]: time="2026-04-17T03:03:01.332949214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:01.333788 containerd[1572]: time="2026-04-17T03:03:01.333755342Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27099952" Apr 17 03:03:01.334732 containerd[1572]: time="2026-04-17T03:03:01.334696396Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:01.336724 containerd[1572]: time="2026-04-17T03:03:01.336688706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:01.337406 containerd[1572]: time="2026-04-17T03:03:01.337386678Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 1.094972796s" Apr 17 03:03:01.337441 containerd[1572]: time="2026-04-17T03:03:01.337413463Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 17 03:03:01.338194 containerd[1572]: time="2026-04-17T03:03:01.337939567Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 17 03:03:02.082204 containerd[1572]: time="2026-04-17T03:03:02.082135166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:02.082858 containerd[1572]: time="2026-04-17T03:03:02.082824765Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252670" Apr 17 03:03:02.083943 containerd[1572]: time="2026-04-17T03:03:02.083887034Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:02.085996 containerd[1572]: time="2026-04-17T03:03:02.085964246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:02.086580 containerd[1572]: time="2026-04-17T03:03:02.086556738Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 748.589342ms" Apr 17 03:03:02.086648 containerd[1572]: time="2026-04-17T03:03:02.086582019Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 17 03:03:02.087045 containerd[1572]: time="2026-04-17T03:03:02.087018982Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 17 03:03:02.678015 containerd[1572]: time="2026-04-17T03:03:02.677945900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:02.678679 containerd[1572]: time="2026-04-17T03:03:02.678654959Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810823" Apr 17 03:03:02.679735 containerd[1572]: time="2026-04-17T03:03:02.679682576Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:02.681935 containerd[1572]: time="2026-04-17T03:03:02.681867919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:02.682539 containerd[1572]: time="2026-04-17T03:03:02.682503777Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 595.449151ms" Apr 17 03:03:02.682539 containerd[1572]: time="2026-04-17T03:03:02.682538849Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 17 03:03:02.683022 containerd[1572]: time="2026-04-17T03:03:02.682998761Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 17 03:03:03.432634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2786086304.mount: Deactivated successfully. 
Apr 17 03:03:03.617091 containerd[1572]: time="2026-04-17T03:03:03.617014612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:03.617533 containerd[1572]: time="2026-04-17T03:03:03.617508906Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972848" Apr 17 03:03:03.618584 containerd[1572]: time="2026-04-17T03:03:03.618535442Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:03.620142 containerd[1572]: time="2026-04-17T03:03:03.620101302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:03.620556 containerd[1572]: time="2026-04-17T03:03:03.620534204Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 937.508657ms" Apr 17 03:03:03.620584 containerd[1572]: time="2026-04-17T03:03:03.620561684Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 17 03:03:03.621124 containerd[1572]: time="2026-04-17T03:03:03.620972402Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 17 03:03:04.050696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount350084936.mount: Deactivated successfully. 
Apr 17 03:03:04.630950 containerd[1572]: time="2026-04-17T03:03:04.630859233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:04.631523 containerd[1572]: time="2026-04-17T03:03:04.631474490Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483" Apr 17 03:03:04.632476 containerd[1572]: time="2026-04-17T03:03:04.632435632Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:04.634828 containerd[1572]: time="2026-04-17T03:03:04.634553132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:04.635490 containerd[1572]: time="2026-04-17T03:03:04.635457371Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.014460617s" Apr 17 03:03:04.635551 containerd[1572]: time="2026-04-17T03:03:04.635541560Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 17 03:03:04.637227 containerd[1572]: time="2026-04-17T03:03:04.637198086Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 17 03:03:05.013630 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 17 03:03:05.014809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 17 03:03:05.018260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount9744488.mount: Deactivated successfully. Apr 17 03:03:05.024672 containerd[1572]: time="2026-04-17T03:03:05.024623694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:05.025257 containerd[1572]: time="2026-04-17T03:03:05.025226261Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 17 03:03:05.026103 containerd[1572]: time="2026-04-17T03:03:05.026075492Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:05.027464 containerd[1572]: time="2026-04-17T03:03:05.027431527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:05.027864 containerd[1572]: time="2026-04-17T03:03:05.027838654Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 390.605986ms" Apr 17 03:03:05.027892 containerd[1572]: time="2026-04-17T03:03:05.027870256Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 17 03:03:05.028339 containerd[1572]: time="2026-04-17T03:03:05.028318084Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 17 03:03:05.159652 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 03:03:05.182388 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 03:03:05.220330 kubelet[2157]: E0417 03:03:05.220262 2157 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 03:03:05.222980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 03:03:05.223087 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 03:03:05.223330 systemd[1]: kubelet.service: Consumed 142ms CPU time, 111.3M memory peak. Apr 17 03:03:05.396142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1397986855.mount: Deactivated successfully. Apr 17 03:03:05.963216 containerd[1572]: time="2026-04-17T03:03:05.963130208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:05.964336 containerd[1572]: time="2026-04-17T03:03:05.964304505Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874255" Apr 17 03:03:05.965183 containerd[1572]: time="2026-04-17T03:03:05.965151194Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:05.967583 containerd[1572]: time="2026-04-17T03:03:05.967523939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:05.968691 containerd[1572]: time="2026-04-17T03:03:05.968649297Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 940.301378ms" Apr 17 03:03:05.968691 containerd[1572]: time="2026-04-17T03:03:05.968687090Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 17 03:03:07.481558 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 03:03:07.481681 systemd[1]: kubelet.service: Consumed 142ms CPU time, 111.3M memory peak. Apr 17 03:03:07.483563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 03:03:07.502302 systemd[1]: Reload requested from client PID 2257 ('systemctl') (unit session-7.scope)... Apr 17 03:03:07.502321 systemd[1]: Reloading... Apr 17 03:03:07.554956 zram_generator::config[2300]: No configuration found. Apr 17 03:03:07.703349 systemd[1]: Reloading finished in 200 ms. Apr 17 03:03:07.749242 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 17 03:03:07.749307 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 17 03:03:07.749533 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 03:03:07.749577 systemd[1]: kubelet.service: Consumed 79ms CPU time, 98.3M memory peak. Apr 17 03:03:07.750853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 03:03:07.857278 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 03:03:07.860826 (kubelet)[2348]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 03:03:07.895003 kubelet[2348]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Apr 17 03:03:07.895003 kubelet[2348]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 03:03:07.895003 kubelet[2348]: I0417 03:03:07.894699 2348 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 03:03:08.267549 kubelet[2348]: I0417 03:03:08.267501 2348 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 17 03:03:08.267549 kubelet[2348]: I0417 03:03:08.267535 2348 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 03:03:08.269019 kubelet[2348]: I0417 03:03:08.268993 2348 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 03:03:08.269019 kubelet[2348]: I0417 03:03:08.269016 2348 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 03:03:08.269238 kubelet[2348]: I0417 03:03:08.269213 2348 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 03:03:08.338865 kubelet[2348]: E0417 03:03:08.338820 2348 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 03:03:08.339047 kubelet[2348]: I0417 03:03:08.338938 2348 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 03:03:08.342179 kubelet[2348]: I0417 03:03:08.342129 2348 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 17 03:03:08.345747 kubelet[2348]: I0417 03:03:08.345731 2348 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 17 03:03:08.346635 kubelet[2348]: I0417 03:03:08.346593 2348 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 03:03:08.346764 kubelet[2348]: I0417 03:03:08.346629 2348 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 03:03:08.346764 kubelet[2348]: I0417 03:03:08.346759 2348 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 03:03:08.346764 
kubelet[2348]: I0417 03:03:08.346766 2348 container_manager_linux.go:306] "Creating device plugin manager" Apr 17 03:03:08.346929 kubelet[2348]: I0417 03:03:08.346836 2348 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 03:03:08.348929 kubelet[2348]: I0417 03:03:08.348892 2348 state_mem.go:36] "Initialized new in-memory state store" Apr 17 03:03:08.349062 kubelet[2348]: I0417 03:03:08.349039 2348 kubelet.go:475] "Attempting to sync node with API server" Apr 17 03:03:08.349062 kubelet[2348]: I0417 03:03:08.349054 2348 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 03:03:08.349097 kubelet[2348]: I0417 03:03:08.349075 2348 kubelet.go:387] "Adding apiserver pod source" Apr 17 03:03:08.349434 kubelet[2348]: E0417 03:03:08.349398 2348 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 03:03:08.350318 kubelet[2348]: I0417 03:03:08.350274 2348 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 03:03:08.350734 kubelet[2348]: E0417 03:03:08.350710 2348 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 03:03:08.351642 kubelet[2348]: I0417 03:03:08.351627 2348 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 17 03:03:08.352093 kubelet[2348]: I0417 03:03:08.352079 2348 kubelet.go:940] "Not starting ClusterTrustBundle informer because we 
are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 03:03:08.352308 kubelet[2348]: I0417 03:03:08.352111 2348 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 03:03:08.352929 kubelet[2348]: W0417 03:03:08.352349 2348 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 17 03:03:08.355248 kubelet[2348]: I0417 03:03:08.355217 2348 server.go:1262] "Started kubelet" Apr 17 03:03:08.355333 kubelet[2348]: I0417 03:03:08.355309 2348 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 03:03:08.355367 kubelet[2348]: I0417 03:03:08.355353 2348 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 03:03:08.355683 kubelet[2348]: I0417 03:03:08.355664 2348 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 03:03:08.356620 kubelet[2348]: I0417 03:03:08.356603 2348 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 03:03:08.358090 kubelet[2348]: I0417 03:03:08.358057 2348 server.go:310] "Adding debug handlers to kubelet server" Apr 17 03:03:08.358357 kubelet[2348]: I0417 03:03:08.358330 2348 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 03:03:08.359447 kubelet[2348]: I0417 03:03:08.358819 2348 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 03:03:08.359626 kubelet[2348]: I0417 03:03:08.359600 2348 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 17 03:03:08.359702 kubelet[2348]: E0417 03:03:08.359679 2348 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 03:03:08.359735 
kubelet[2348]: I0417 03:03:08.359715 2348 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 03:03:08.359770 kubelet[2348]: I0417 03:03:08.359752 2348 reconciler.go:29] "Reconciler: start to sync state" Apr 17 03:03:08.360018 kubelet[2348]: E0417 03:03:08.359987 2348 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 03:03:08.360101 kubelet[2348]: E0417 03:03:08.360056 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="200ms" Apr 17 03:03:08.360602 kubelet[2348]: I0417 03:03:08.360566 2348 factory.go:223] Registration of the systemd container factory successfully Apr 17 03:03:08.360655 kubelet[2348]: I0417 03:03:08.360638 2348 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 03:03:08.361528 kubelet[2348]: I0417 03:03:08.361479 2348 factory.go:223] Registration of the containerd container factory successfully Apr 17 03:03:08.361684 kubelet[2348]: E0417 03:03:08.360799 2348 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a705d485b1efba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 03:03:08.355186618 +0000 UTC m=+0.491256723,LastTimestamp:2026-04-17 03:03:08.355186618 +0000 UTC m=+0.491256723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 03:03:08.361753 kubelet[2348]: E0417 03:03:08.361742 2348 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 03:03:08.368208 kubelet[2348]: I0417 03:03:08.368177 2348 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 03:03:08.368254 kubelet[2348]: I0417 03:03:08.368219 2348 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 03:03:08.368254 kubelet[2348]: I0417 03:03:08.368230 2348 state_mem.go:36] "Initialized new in-memory state store" Apr 17 03:03:08.371782 kubelet[2348]: I0417 03:03:08.371239 2348 policy_none.go:49] "None policy: Start" Apr 17 03:03:08.371782 kubelet[2348]: I0417 03:03:08.371253 2348 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 03:03:08.371782 kubelet[2348]: I0417 03:03:08.371261 2348 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 03:03:08.372642 kubelet[2348]: I0417 03:03:08.372610 2348 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 17 03:03:08.373941 kubelet[2348]: I0417 03:03:08.373023 2348 policy_none.go:47] "Start" Apr 17 03:03:08.373941 kubelet[2348]: I0417 03:03:08.373518 2348 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 17 03:03:08.373941 kubelet[2348]: I0417 03:03:08.373529 2348 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 17 03:03:08.373941 kubelet[2348]: I0417 03:03:08.373553 2348 kubelet.go:2428] "Starting kubelet main sync loop" Apr 17 03:03:08.373941 kubelet[2348]: E0417 03:03:08.373588 2348 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 03:03:08.377524 kubelet[2348]: E0417 03:03:08.377483 2348 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 03:03:08.380140 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 17 03:03:08.393151 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 17 03:03:08.404964 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 17 03:03:08.406003 kubelet[2348]: E0417 03:03:08.405982 2348 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 03:03:08.406153 kubelet[2348]: I0417 03:03:08.406122 2348 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 03:03:08.406367 kubelet[2348]: I0417 03:03:08.406142 2348 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 03:03:08.406367 kubelet[2348]: I0417 03:03:08.406292 2348 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 03:03:08.407200 kubelet[2348]: E0417 03:03:08.407182 2348 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 03:03:08.407250 kubelet[2348]: E0417 03:03:08.407218 2348 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 03:03:08.483609 systemd[1]: Created slice kubepods-burstable-pod33805da6ceee61fcd3549bb152aaab50.slice - libcontainer container kubepods-burstable-pod33805da6ceee61fcd3549bb152aaab50.slice. Apr 17 03:03:08.490456 kubelet[2348]: E0417 03:03:08.490415 2348 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 03:03:08.492607 systemd[1]: Created slice kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice - libcontainer container kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice. 
Apr 17 03:03:08.493658 kubelet[2348]: E0417 03:03:08.493639 2348 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 03:03:08.495057 systemd[1]: Created slice kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice - libcontainer container kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice. Apr 17 03:03:08.496065 kubelet[2348]: E0417 03:03:08.496033 2348 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 03:03:08.507860 kubelet[2348]: I0417 03:03:08.507825 2348 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 03:03:08.508181 kubelet[2348]: E0417 03:03:08.508158 2348 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Apr 17 03:03:08.561512 kubelet[2348]: I0417 03:03:08.560826 2348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33805da6ceee61fcd3549bb152aaab50-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"33805da6ceee61fcd3549bb152aaab50\") " pod="kube-system/kube-apiserver-localhost" Apr 17 03:03:08.561512 kubelet[2348]: I0417 03:03:08.560865 2348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33805da6ceee61fcd3549bb152aaab50-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"33805da6ceee61fcd3549bb152aaab50\") " pod="kube-system/kube-apiserver-localhost" Apr 17 03:03:08.561512 kubelet[2348]: I0417 03:03:08.560881 2348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 03:03:08.561512 kubelet[2348]: I0417 03:03:08.560896 2348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 03:03:08.561512 kubelet[2348]: I0417 03:03:08.560943 2348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 03:03:08.561690 kubelet[2348]: I0417 03:03:08.560958 2348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 03:03:08.561690 kubelet[2348]: I0417 03:03:08.560972 2348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 17 03:03:08.561690 kubelet[2348]: I0417 03:03:08.560984 2348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33805da6ceee61fcd3549bb152aaab50-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"33805da6ceee61fcd3549bb152aaab50\") " pod="kube-system/kube-apiserver-localhost" Apr 17 03:03:08.561690 kubelet[2348]: I0417 03:03:08.560998 2348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 03:03:08.561690 kubelet[2348]: E0417 03:03:08.561203 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="400ms" Apr 17 03:03:08.711724 kubelet[2348]: I0417 03:03:08.711673 2348 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 03:03:08.712017 kubelet[2348]: E0417 03:03:08.711994 2348 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Apr 17 03:03:08.795872 containerd[1572]: time="2026-04-17T03:03:08.795796722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:33805da6ceee61fcd3549bb152aaab50,Namespace:kube-system,Attempt:0,}" Apr 17 03:03:08.798072 containerd[1572]: time="2026-04-17T03:03:08.798036912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,}" Apr 17 03:03:08.800058 containerd[1572]: time="2026-04-17T03:03:08.800004851Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,}" Apr 17 03:03:08.962480 kubelet[2348]: E0417 03:03:08.962331 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="800ms" Apr 17 03:03:09.114214 kubelet[2348]: I0417 03:03:09.114111 2348 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 03:03:09.114426 kubelet[2348]: E0417 03:03:09.114402 2348 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Apr 17 03:03:09.129453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1596246356.mount: Deactivated successfully. Apr 17 03:03:09.133967 containerd[1572]: time="2026-04-17T03:03:09.133871237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 03:03:09.134444 containerd[1572]: time="2026-04-17T03:03:09.134329870Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 17 03:03:09.135874 containerd[1572]: time="2026-04-17T03:03:09.135845375Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 03:03:09.138892 containerd[1572]: time="2026-04-17T03:03:09.138854692Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 
03:03:09.139640 containerd[1572]: time="2026-04-17T03:03:09.139620596Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 03:03:09.140303 containerd[1572]: time="2026-04-17T03:03:09.140279130Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 17 03:03:09.140951 containerd[1572]: time="2026-04-17T03:03:09.140924460Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 17 03:03:09.141668 containerd[1572]: time="2026-04-17T03:03:09.141647641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 03:03:09.142067 containerd[1572]: time="2026-04-17T03:03:09.142047467Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 340.649873ms" Apr 17 03:03:09.143806 containerd[1572]: time="2026-04-17T03:03:09.143780395Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 343.197352ms" Apr 17 03:03:09.145460 containerd[1572]: time="2026-04-17T03:03:09.145434957Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 347.72965ms" Apr 17 03:03:09.161812 containerd[1572]: time="2026-04-17T03:03:09.161778825Z" level=info msg="connecting to shim 5d870f588882f0c6ea2d99200ce112db4c45eb6fb4e3604961bd8a9d1dd8bfb1" address="unix:///run/containerd/s/0f20f43202e1648247c57ba4d2917652d3ca87bc85fd6647428b784d9e285efe" namespace=k8s.io protocol=ttrpc version=3 Apr 17 03:03:09.168696 containerd[1572]: time="2026-04-17T03:03:09.167994823Z" level=info msg="connecting to shim cf971236a7d3b0f60d117e62163b99350a993c8f06fa67b2f9c4946509af9eff" address="unix:///run/containerd/s/ed357b56c4f1375d02e15197e3bae053e1e23224d6f33d1519930ed763057703" namespace=k8s.io protocol=ttrpc version=3 Apr 17 03:03:09.176275 containerd[1572]: time="2026-04-17T03:03:09.176194219Z" level=info msg="connecting to shim c945258409f51f49911928f1ae6bb1e0f1d43af65bc84b8356bb0c17d688e4f7" address="unix:///run/containerd/s/5520c18da835e14f4370867b8f058bdbd251e20058ab3ebf137968454b77c90d" namespace=k8s.io protocol=ttrpc version=3 Apr 17 03:03:09.184078 systemd[1]: Started cri-containerd-5d870f588882f0c6ea2d99200ce112db4c45eb6fb4e3604961bd8a9d1dd8bfb1.scope - libcontainer container 5d870f588882f0c6ea2d99200ce112db4c45eb6fb4e3604961bd8a9d1dd8bfb1. Apr 17 03:03:09.186260 systemd[1]: Started cri-containerd-cf971236a7d3b0f60d117e62163b99350a993c8f06fa67b2f9c4946509af9eff.scope - libcontainer container cf971236a7d3b0f60d117e62163b99350a993c8f06fa67b2f9c4946509af9eff. Apr 17 03:03:09.211192 systemd[1]: Started cri-containerd-c945258409f51f49911928f1ae6bb1e0f1d43af65bc84b8356bb0c17d688e4f7.scope - libcontainer container c945258409f51f49911928f1ae6bb1e0f1d43af65bc84b8356bb0c17d688e4f7. 
Apr 17 03:03:09.249292 containerd[1572]: time="2026-04-17T03:03:09.248975515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d870f588882f0c6ea2d99200ce112db4c45eb6fb4e3604961bd8a9d1dd8bfb1\"" Apr 17 03:03:09.252153 containerd[1572]: time="2026-04-17T03:03:09.252087809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf971236a7d3b0f60d117e62163b99350a993c8f06fa67b2f9c4946509af9eff\"" Apr 17 03:03:09.254053 containerd[1572]: time="2026-04-17T03:03:09.254014931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:33805da6ceee61fcd3549bb152aaab50,Namespace:kube-system,Attempt:0,} returns sandbox id \"c945258409f51f49911928f1ae6bb1e0f1d43af65bc84b8356bb0c17d688e4f7\"" Apr 17 03:03:09.254227 containerd[1572]: time="2026-04-17T03:03:09.254157487Z" level=info msg="CreateContainer within sandbox \"5d870f588882f0c6ea2d99200ce112db4c45eb6fb4e3604961bd8a9d1dd8bfb1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 03:03:09.255849 containerd[1572]: time="2026-04-17T03:03:09.255821592Z" level=info msg="CreateContainer within sandbox \"cf971236a7d3b0f60d117e62163b99350a993c8f06fa67b2f9c4946509af9eff\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 03:03:09.257726 containerd[1572]: time="2026-04-17T03:03:09.257700526Z" level=info msg="CreateContainer within sandbox \"c945258409f51f49911928f1ae6bb1e0f1d43af65bc84b8356bb0c17d688e4f7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 03:03:09.263061 containerd[1572]: time="2026-04-17T03:03:09.263025013Z" level=info msg="Container 9cb5395cfedb6c99891a40750b189062bbac2a6577fd6c44c4a831893d38f5b1: CDI devices from CRI Config.CDIDevices: []" Apr 17 
03:03:09.266015 containerd[1572]: time="2026-04-17T03:03:09.265986451Z" level=info msg="Container 3f45846431c94b53eaa7970d09b24280638ab3a8f570bbf01e8f690f2b1cdd61: CDI devices from CRI Config.CDIDevices: []" Apr 17 03:03:09.270586 containerd[1572]: time="2026-04-17T03:03:09.270549268Z" level=info msg="Container 91dc1e1d9f3586f63fd817d5ac309581297bf80da8ed5b354cae4f6b2dd8cebc: CDI devices from CRI Config.CDIDevices: []" Apr 17 03:03:09.271228 containerd[1572]: time="2026-04-17T03:03:09.271202095Z" level=info msg="CreateContainer within sandbox \"5d870f588882f0c6ea2d99200ce112db4c45eb6fb4e3604961bd8a9d1dd8bfb1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9cb5395cfedb6c99891a40750b189062bbac2a6577fd6c44c4a831893d38f5b1\"" Apr 17 03:03:09.271715 containerd[1572]: time="2026-04-17T03:03:09.271676823Z" level=info msg="StartContainer for \"9cb5395cfedb6c99891a40750b189062bbac2a6577fd6c44c4a831893d38f5b1\"" Apr 17 03:03:09.272493 containerd[1572]: time="2026-04-17T03:03:09.272472518Z" level=info msg="connecting to shim 9cb5395cfedb6c99891a40750b189062bbac2a6577fd6c44c4a831893d38f5b1" address="unix:///run/containerd/s/0f20f43202e1648247c57ba4d2917652d3ca87bc85fd6647428b784d9e285efe" protocol=ttrpc version=3 Apr 17 03:03:09.273304 containerd[1572]: time="2026-04-17T03:03:09.273257865Z" level=info msg="CreateContainer within sandbox \"cf971236a7d3b0f60d117e62163b99350a993c8f06fa67b2f9c4946509af9eff\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3f45846431c94b53eaa7970d09b24280638ab3a8f570bbf01e8f690f2b1cdd61\"" Apr 17 03:03:09.273637 containerd[1572]: time="2026-04-17T03:03:09.273614168Z" level=info msg="StartContainer for \"3f45846431c94b53eaa7970d09b24280638ab3a8f570bbf01e8f690f2b1cdd61\"" Apr 17 03:03:09.274349 containerd[1572]: time="2026-04-17T03:03:09.274320285Z" level=info msg="connecting to shim 3f45846431c94b53eaa7970d09b24280638ab3a8f570bbf01e8f690f2b1cdd61" 
address="unix:///run/containerd/s/ed357b56c4f1375d02e15197e3bae053e1e23224d6f33d1519930ed763057703" protocol=ttrpc version=3 Apr 17 03:03:09.276990 containerd[1572]: time="2026-04-17T03:03:09.276955089Z" level=info msg="CreateContainer within sandbox \"c945258409f51f49911928f1ae6bb1e0f1d43af65bc84b8356bb0c17d688e4f7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"91dc1e1d9f3586f63fd817d5ac309581297bf80da8ed5b354cae4f6b2dd8cebc\"" Apr 17 03:03:09.277291 containerd[1572]: time="2026-04-17T03:03:09.277276685Z" level=info msg="StartContainer for \"91dc1e1d9f3586f63fd817d5ac309581297bf80da8ed5b354cae4f6b2dd8cebc\"" Apr 17 03:03:09.278322 containerd[1572]: time="2026-04-17T03:03:09.278305022Z" level=info msg="connecting to shim 91dc1e1d9f3586f63fd817d5ac309581297bf80da8ed5b354cae4f6b2dd8cebc" address="unix:///run/containerd/s/5520c18da835e14f4370867b8f058bdbd251e20058ab3ebf137968454b77c90d" protocol=ttrpc version=3 Apr 17 03:03:09.288061 systemd[1]: Started cri-containerd-9cb5395cfedb6c99891a40750b189062bbac2a6577fd6c44c4a831893d38f5b1.scope - libcontainer container 9cb5395cfedb6c99891a40750b189062bbac2a6577fd6c44c4a831893d38f5b1. Apr 17 03:03:09.291183 systemd[1]: Started cri-containerd-3f45846431c94b53eaa7970d09b24280638ab3a8f570bbf01e8f690f2b1cdd61.scope - libcontainer container 3f45846431c94b53eaa7970d09b24280638ab3a8f570bbf01e8f690f2b1cdd61. Apr 17 03:03:09.292031 systemd[1]: Started cri-containerd-91dc1e1d9f3586f63fd817d5ac309581297bf80da8ed5b354cae4f6b2dd8cebc.scope - libcontainer container 91dc1e1d9f3586f63fd817d5ac309581297bf80da8ed5b354cae4f6b2dd8cebc. 
Apr 17 03:03:09.339660 containerd[1572]: time="2026-04-17T03:03:09.339568738Z" level=info msg="StartContainer for \"9cb5395cfedb6c99891a40750b189062bbac2a6577fd6c44c4a831893d38f5b1\" returns successfully" Apr 17 03:03:09.340310 containerd[1572]: time="2026-04-17T03:03:09.340242497Z" level=info msg="StartContainer for \"3f45846431c94b53eaa7970d09b24280638ab3a8f570bbf01e8f690f2b1cdd61\" returns successfully" Apr 17 03:03:09.341825 containerd[1572]: time="2026-04-17T03:03:09.341769138Z" level=info msg="StartContainer for \"91dc1e1d9f3586f63fd817d5ac309581297bf80da8ed5b354cae4f6b2dd8cebc\" returns successfully" Apr 17 03:03:09.384026 kubelet[2348]: E0417 03:03:09.383989 2348 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 03:03:09.385359 kubelet[2348]: E0417 03:03:09.385335 2348 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 03:03:09.387756 kubelet[2348]: E0417 03:03:09.387743 2348 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 03:03:09.916008 kubelet[2348]: I0417 03:03:09.915961 2348 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 03:03:10.116428 kubelet[2348]: E0417 03:03:10.116375 2348 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 17 03:03:10.211024 kubelet[2348]: I0417 03:03:10.209900 2348 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 17 03:03:10.211024 kubelet[2348]: E0417 03:03:10.209960 2348 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 17 03:03:10.260724 kubelet[2348]: I0417 
03:03:10.260647 2348 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 03:03:10.266062 kubelet[2348]: E0417 03:03:10.266027 2348 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 17 03:03:10.266062 kubelet[2348]: I0417 03:03:10.266056 2348 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 03:03:10.267399 kubelet[2348]: E0417 03:03:10.267347 2348 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 17 03:03:10.267553 kubelet[2348]: I0417 03:03:10.267423 2348 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 03:03:10.268768 kubelet[2348]: E0417 03:03:10.268748 2348 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 17 03:03:10.352337 kubelet[2348]: I0417 03:03:10.352270 2348 apiserver.go:52] "Watching apiserver" Apr 17 03:03:10.360440 kubelet[2348]: I0417 03:03:10.360394 2348 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 03:03:10.388549 kubelet[2348]: I0417 03:03:10.388502 2348 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 03:03:10.388693 kubelet[2348]: I0417 03:03:10.388638 2348 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 03:03:10.390044 kubelet[2348]: E0417 03:03:10.390023 2348 kubelet.go:3222] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 17 03:03:10.390325 kubelet[2348]: E0417 03:03:10.390297 2348 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 17 03:03:11.390942 kubelet[2348]: I0417 03:03:11.390274 2348 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 03:03:11.395519 kubelet[2348]: E0417 03:03:11.395497 2348 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:12.245982 kubelet[2348]: I0417 03:03:12.245901 2348 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 03:03:12.251074 kubelet[2348]: E0417 03:03:12.251033 2348 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:12.298120 systemd[1]: Reload requested from client PID 2637 ('systemctl') (unit session-7.scope)... Apr 17 03:03:12.298139 systemd[1]: Reloading... Apr 17 03:03:12.349978 zram_generator::config[2680]: No configuration found. 
Apr 17 03:03:12.392265 kubelet[2348]: E0417 03:03:12.392223 2348 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:12.392517 kubelet[2348]: E0417 03:03:12.392305 2348 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:12.498523 systemd[1]: Reloading finished in 200 ms. Apr 17 03:03:12.521439 kubelet[2348]: I0417 03:03:12.521371 2348 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 03:03:12.521388 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 03:03:12.536197 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 03:03:12.536426 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 03:03:12.536476 systemd[1]: kubelet.service: Consumed 755ms CPU time, 125.8M memory peak. Apr 17 03:03:12.537791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 03:03:12.668041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 03:03:12.675143 (kubelet)[2725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 03:03:12.706938 kubelet[2725]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 03:03:12.706938 kubelet[2725]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 03:03:12.707216 kubelet[2725]: I0417 03:03:12.707047 2725 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 03:03:12.712598 kubelet[2725]: I0417 03:03:12.712558 2725 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 17 03:03:12.712598 kubelet[2725]: I0417 03:03:12.712583 2725 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 03:03:12.712598 kubelet[2725]: I0417 03:03:12.712600 2725 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 03:03:12.712598 kubelet[2725]: I0417 03:03:12.712605 2725 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 03:03:12.712755 kubelet[2725]: I0417 03:03:12.712739 2725 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 03:03:12.713621 kubelet[2725]: I0417 03:03:12.713572 2725 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 03:03:12.715844 kubelet[2725]: I0417 03:03:12.715730 2725 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 03:03:12.720771 kubelet[2725]: I0417 03:03:12.720749 2725 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 17 03:03:12.724415 kubelet[2725]: I0417 03:03:12.724382 2725 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 17 03:03:12.724555 kubelet[2725]: I0417 03:03:12.724522 2725 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 03:03:12.724689 kubelet[2725]: I0417 03:03:12.724547 2725 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 03:03:12.724689 kubelet[2725]: I0417 03:03:12.724684 2725 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 03:03:12.724689 
kubelet[2725]: I0417 03:03:12.724691 2725 container_manager_linux.go:306] "Creating device plugin manager" Apr 17 03:03:12.724790 kubelet[2725]: I0417 03:03:12.724710 2725 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 03:03:12.724865 kubelet[2725]: I0417 03:03:12.724839 2725 state_mem.go:36] "Initialized new in-memory state store" Apr 17 03:03:12.725005 kubelet[2725]: I0417 03:03:12.724993 2725 kubelet.go:475] "Attempting to sync node with API server" Apr 17 03:03:12.725027 kubelet[2725]: I0417 03:03:12.725007 2725 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 03:03:12.725027 kubelet[2725]: I0417 03:03:12.725022 2725 kubelet.go:387] "Adding apiserver pod source" Apr 17 03:03:12.725059 kubelet[2725]: I0417 03:03:12.725029 2725 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 03:03:12.725609 kubelet[2725]: I0417 03:03:12.725585 2725 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 17 03:03:12.728886 kubelet[2725]: I0417 03:03:12.726062 2725 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 03:03:12.728886 kubelet[2725]: I0417 03:03:12.726084 2725 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 03:03:12.730562 kubelet[2725]: I0417 03:03:12.730551 2725 server.go:1262] "Started kubelet" Apr 17 03:03:12.730856 kubelet[2725]: I0417 03:03:12.730826 2725 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 03:03:12.731086 kubelet[2725]: I0417 03:03:12.731037 2725 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 03:03:12.731665 kubelet[2725]: I0417 03:03:12.731655 2725 server_v1.go:49] 
"podresources" method="list" useActivePods=true Apr 17 03:03:12.732613 kubelet[2725]: I0417 03:03:12.731957 2725 server.go:310] "Adding debug handlers to kubelet server" Apr 17 03:03:12.732796 kubelet[2725]: I0417 03:03:12.732787 2725 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 03:03:12.734191 kubelet[2725]: I0417 03:03:12.734105 2725 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 03:03:12.734258 kubelet[2725]: I0417 03:03:12.734201 2725 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 03:03:12.734541 kubelet[2725]: E0417 03:03:12.734511 2725 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 03:03:12.735087 kubelet[2725]: I0417 03:03:12.735069 2725 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 17 03:03:12.735226 kubelet[2725]: I0417 03:03:12.735137 2725 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 03:03:12.735226 kubelet[2725]: I0417 03:03:12.735197 2725 reconciler.go:29] "Reconciler: start to sync state" Apr 17 03:03:12.738089 kubelet[2725]: I0417 03:03:12.738062 2725 factory.go:223] Registration of the containerd container factory successfully Apr 17 03:03:12.738089 kubelet[2725]: I0417 03:03:12.738082 2725 factory.go:223] Registration of the systemd container factory successfully Apr 17 03:03:12.738804 kubelet[2725]: I0417 03:03:12.738160 2725 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 03:03:12.745341 kubelet[2725]: I0417 03:03:12.745293 2725 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 17 03:03:12.746347 kubelet[2725]: I0417 03:03:12.746334 2725 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 17 03:03:12.746438 kubelet[2725]: I0417 03:03:12.746432 2725 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 17 03:03:12.746483 kubelet[2725]: I0417 03:03:12.746479 2725 kubelet.go:2428] "Starting kubelet main sync loop" Apr 17 03:03:12.746537 kubelet[2725]: E0417 03:03:12.746528 2725 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 03:03:12.762546 kubelet[2725]: I0417 03:03:12.761755 2725 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 03:03:12.762546 kubelet[2725]: I0417 03:03:12.761778 2725 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 03:03:12.762546 kubelet[2725]: I0417 03:03:12.761803 2725 state_mem.go:36] "Initialized new in-memory state store" Apr 17 03:03:12.762546 kubelet[2725]: I0417 03:03:12.762498 2725 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 03:03:12.762546 kubelet[2725]: I0417 03:03:12.762509 2725 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 03:03:12.762546 kubelet[2725]: I0417 03:03:12.762529 2725 policy_none.go:49] "None policy: Start" Apr 17 03:03:12.762546 kubelet[2725]: I0417 03:03:12.762538 2725 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 03:03:12.762546 kubelet[2725]: I0417 03:03:12.762546 2725 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 03:03:12.762731 kubelet[2725]: I0417 03:03:12.762688 2725 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 17 03:03:12.762731 kubelet[2725]: I0417 03:03:12.762695 2725 policy_none.go:47] "Start" Apr 17 03:03:12.765976 kubelet[2725]: E0417 03:03:12.765955 2725 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 03:03:12.766100 kubelet[2725]: I0417 03:03:12.766078 2725 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 03:03:12.766149 kubelet[2725]: I0417 03:03:12.766099 2725 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 03:03:12.766354 kubelet[2725]: I0417 03:03:12.766321 2725 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 03:03:12.769124 kubelet[2725]: E0417 03:03:12.769078 2725 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 03:03:12.847981 kubelet[2725]: I0417 03:03:12.847936 2725 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 03:03:12.848165 kubelet[2725]: I0417 03:03:12.848027 2725 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 03:03:12.848165 kubelet[2725]: I0417 03:03:12.847945 2725 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 03:03:12.855114 kubelet[2725]: E0417 03:03:12.855073 2725 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 17 03:03:12.855468 kubelet[2725]: E0417 03:03:12.855418 2725 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 17 03:03:12.871868 kubelet[2725]: I0417 03:03:12.871810 2725 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 03:03:12.877691 kubelet[2725]: I0417 03:03:12.877669 2725 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 17 03:03:12.877764 kubelet[2725]: I0417 
03:03:12.877727 2725 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 17 03:03:12.936437 kubelet[2725]: I0417 03:03:12.936394 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 03:03:12.936437 kubelet[2725]: I0417 03:03:12.936428 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 03:03:12.936579 kubelet[2725]: I0417 03:03:12.936451 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 03:03:12.936579 kubelet[2725]: I0417 03:03:12.936506 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33805da6ceee61fcd3549bb152aaab50-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"33805da6ceee61fcd3549bb152aaab50\") " pod="kube-system/kube-apiserver-localhost" Apr 17 03:03:12.936579 kubelet[2725]: I0417 03:03:12.936529 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33805da6ceee61fcd3549bb152aaab50-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"33805da6ceee61fcd3549bb152aaab50\") " pod="kube-system/kube-apiserver-localhost" Apr 17 03:03:12.936579 kubelet[2725]: I0417 03:03:12.936561 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 03:03:12.936649 kubelet[2725]: I0417 03:03:12.936607 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 03:03:12.936649 kubelet[2725]: I0417 03:03:12.936629 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 17 03:03:12.936649 kubelet[2725]: I0417 03:03:12.936646 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33805da6ceee61fcd3549bb152aaab50-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"33805da6ceee61fcd3549bb152aaab50\") " pod="kube-system/kube-apiserver-localhost" Apr 17 03:03:13.154106 kubelet[2725]: E0417 03:03:13.153956 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:13.155317 kubelet[2725]: E0417 
03:03:13.155292 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:13.156422 kubelet[2725]: E0417 03:03:13.156394 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:13.726431 kubelet[2725]: I0417 03:03:13.726178 2725 apiserver.go:52] "Watching apiserver" Apr 17 03:03:13.736126 kubelet[2725]: I0417 03:03:13.736066 2725 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 03:03:13.755488 kubelet[2725]: E0417 03:03:13.755125 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:13.755829 kubelet[2725]: I0417 03:03:13.755810 2725 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 03:03:13.756168 kubelet[2725]: I0417 03:03:13.755972 2725 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 03:03:13.761461 kubelet[2725]: E0417 03:03:13.761247 2725 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 17 03:03:13.761461 kubelet[2725]: E0417 03:03:13.761399 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:13.761982 kubelet[2725]: E0417 03:03:13.761961 2725 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 17 03:03:13.762092 kubelet[2725]: E0417 03:03:13.762076 2725 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:13.776384 kubelet[2725]: I0417 03:03:13.776279 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.7762677399999998 podStartE2EDuration="2.77626774s" podCreationTimestamp="2026-04-17 03:03:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 03:03:13.770133466 +0000 UTC m=+1.092151409" watchObservedRunningTime="2026-04-17 03:03:13.77626774 +0000 UTC m=+1.098285682" Apr 17 03:03:13.783335 kubelet[2725]: I0417 03:03:13.783265 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.783247583 podStartE2EDuration="1.783247583s" podCreationTimestamp="2026-04-17 03:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 03:03:13.776577179 +0000 UTC m=+1.098595112" watchObservedRunningTime="2026-04-17 03:03:13.783247583 +0000 UTC m=+1.105265516" Apr 17 03:03:13.790480 kubelet[2725]: I0417 03:03:13.790436 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.790423086 podStartE2EDuration="1.790423086s" podCreationTimestamp="2026-04-17 03:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 03:03:13.783477858 +0000 UTC m=+1.105495800" watchObservedRunningTime="2026-04-17 03:03:13.790423086 +0000 UTC m=+1.112441017" Apr 17 03:03:14.756661 kubelet[2725]: E0417 03:03:14.756618 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:14.757112 kubelet[2725]: E0417 03:03:14.756697 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:18.240195 kubelet[2725]: E0417 03:03:18.240119 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:19.028799 kubelet[2725]: I0417 03:03:19.028754 2725 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 03:03:19.029125 containerd[1572]: time="2026-04-17T03:03:19.029078944Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 03:03:19.029360 kubelet[2725]: I0417 03:03:19.029270 2725 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 03:03:20.096544 systemd[1]: Created slice kubepods-besteffort-pod93afe235_4f17_46f7_8b43_4f2e9b8f9aad.slice - libcontainer container kubepods-besteffort-pod93afe235_4f17_46f7_8b43_4f2e9b8f9aad.slice. 
Apr 17 03:03:20.182978 kubelet[2725]: I0417 03:03:20.182821 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93afe235-4f17-46f7-8b43-4f2e9b8f9aad-xtables-lock\") pod \"kube-proxy-5tp9w\" (UID: \"93afe235-4f17-46f7-8b43-4f2e9b8f9aad\") " pod="kube-system/kube-proxy-5tp9w" Apr 17 03:03:20.182978 kubelet[2725]: I0417 03:03:20.182892 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93afe235-4f17-46f7-8b43-4f2e9b8f9aad-lib-modules\") pod \"kube-proxy-5tp9w\" (UID: \"93afe235-4f17-46f7-8b43-4f2e9b8f9aad\") " pod="kube-system/kube-proxy-5tp9w" Apr 17 03:03:20.184130 kubelet[2725]: I0417 03:03:20.183146 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/93afe235-4f17-46f7-8b43-4f2e9b8f9aad-kube-proxy\") pod \"kube-proxy-5tp9w\" (UID: \"93afe235-4f17-46f7-8b43-4f2e9b8f9aad\") " pod="kube-system/kube-proxy-5tp9w" Apr 17 03:03:20.184130 kubelet[2725]: I0417 03:03:20.183178 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh6bg\" (UniqueName: \"kubernetes.io/projected/93afe235-4f17-46f7-8b43-4f2e9b8f9aad-kube-api-access-gh6bg\") pod \"kube-proxy-5tp9w\" (UID: \"93afe235-4f17-46f7-8b43-4f2e9b8f9aad\") " pod="kube-system/kube-proxy-5tp9w" Apr 17 03:03:20.213523 systemd[1]: Created slice kubepods-besteffort-pod3233f180_eb8f_416b_bb0b_e4ecc1d2ae16.slice - libcontainer container kubepods-besteffort-pod3233f180_eb8f_416b_bb0b_e4ecc1d2ae16.slice. 
Apr 17 03:03:20.284247 kubelet[2725]: I0417 03:03:20.284153 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3233f180-eb8f-416b-bb0b-e4ecc1d2ae16-var-lib-calico\") pod \"tigera-operator-5588576f44-vsr8w\" (UID: \"3233f180-eb8f-416b-bb0b-e4ecc1d2ae16\") " pod="tigera-operator/tigera-operator-5588576f44-vsr8w" Apr 17 03:03:20.284247 kubelet[2725]: I0417 03:03:20.284252 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2vdc\" (UniqueName: \"kubernetes.io/projected/3233f180-eb8f-416b-bb0b-e4ecc1d2ae16-kube-api-access-z2vdc\") pod \"tigera-operator-5588576f44-vsr8w\" (UID: \"3233f180-eb8f-416b-bb0b-e4ecc1d2ae16\") " pod="tigera-operator/tigera-operator-5588576f44-vsr8w" Apr 17 03:03:20.347987 kubelet[2725]: E0417 03:03:20.347837 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:20.407203 kubelet[2725]: E0417 03:03:20.407132 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:20.407839 containerd[1572]: time="2026-04-17T03:03:20.407783366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5tp9w,Uid:93afe235-4f17-46f7-8b43-4f2e9b8f9aad,Namespace:kube-system,Attempt:0,}" Apr 17 03:03:20.425140 containerd[1572]: time="2026-04-17T03:03:20.425086590Z" level=info msg="connecting to shim 89c0ddffff871960e404ff4c3d014edf208320b6a9b3a094d3450bb250290c4a" address="unix:///run/containerd/s/7fe9b47d1733f665b163af14ce45ad8de8eb38d9573f31a83747032dc4b002ea" namespace=k8s.io protocol=ttrpc version=3 Apr 17 03:03:20.449107 systemd[1]: Started 
cri-containerd-89c0ddffff871960e404ff4c3d014edf208320b6a9b3a094d3450bb250290c4a.scope - libcontainer container 89c0ddffff871960e404ff4c3d014edf208320b6a9b3a094d3450bb250290c4a. Apr 17 03:03:20.469716 containerd[1572]: time="2026-04-17T03:03:20.469371247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5tp9w,Uid:93afe235-4f17-46f7-8b43-4f2e9b8f9aad,Namespace:kube-system,Attempt:0,} returns sandbox id \"89c0ddffff871960e404ff4c3d014edf208320b6a9b3a094d3450bb250290c4a\"" Apr 17 03:03:20.470017 kubelet[2725]: E0417 03:03:20.470002 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:20.474181 containerd[1572]: time="2026-04-17T03:03:20.474143507Z" level=info msg="CreateContainer within sandbox \"89c0ddffff871960e404ff4c3d014edf208320b6a9b3a094d3450bb250290c4a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 03:03:20.482128 containerd[1572]: time="2026-04-17T03:03:20.481940267Z" level=info msg="Container 41f35060cdf6b6df381e847cb8b97d027f3975471c773725a99606b10ba8f413: CDI devices from CRI Config.CDIDevices: []" Apr 17 03:03:20.488223 containerd[1572]: time="2026-04-17T03:03:20.488179821Z" level=info msg="CreateContainer within sandbox \"89c0ddffff871960e404ff4c3d014edf208320b6a9b3a094d3450bb250290c4a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"41f35060cdf6b6df381e847cb8b97d027f3975471c773725a99606b10ba8f413\"" Apr 17 03:03:20.488630 containerd[1572]: time="2026-04-17T03:03:20.488597825Z" level=info msg="StartContainer for \"41f35060cdf6b6df381e847cb8b97d027f3975471c773725a99606b10ba8f413\"" Apr 17 03:03:20.489713 containerd[1572]: time="2026-04-17T03:03:20.489668750Z" level=info msg="connecting to shim 41f35060cdf6b6df381e847cb8b97d027f3975471c773725a99606b10ba8f413" 
address="unix:///run/containerd/s/7fe9b47d1733f665b163af14ce45ad8de8eb38d9573f31a83747032dc4b002ea" protocol=ttrpc version=3 Apr 17 03:03:20.506078 systemd[1]: Started cri-containerd-41f35060cdf6b6df381e847cb8b97d027f3975471c773725a99606b10ba8f413.scope - libcontainer container 41f35060cdf6b6df381e847cb8b97d027f3975471c773725a99606b10ba8f413. Apr 17 03:03:20.520306 containerd[1572]: time="2026-04-17T03:03:20.520252353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-vsr8w,Uid:3233f180-eb8f-416b-bb0b-e4ecc1d2ae16,Namespace:tigera-operator,Attempt:0,}" Apr 17 03:03:20.535128 containerd[1572]: time="2026-04-17T03:03:20.535088359Z" level=info msg="connecting to shim 4590d00110aae81f5b4d75a49c410361e7946bb5c68b47b3e4401fb73860218d" address="unix:///run/containerd/s/cc3043e1e87a08fc0e9aa406ea590b97f922670528af932e3d537fc20f0f046e" namespace=k8s.io protocol=ttrpc version=3 Apr 17 03:03:20.556085 systemd[1]: Started cri-containerd-4590d00110aae81f5b4d75a49c410361e7946bb5c68b47b3e4401fb73860218d.scope - libcontainer container 4590d00110aae81f5b4d75a49c410361e7946bb5c68b47b3e4401fb73860218d. 
Apr 17 03:03:20.561987 containerd[1572]: time="2026-04-17T03:03:20.561948982Z" level=info msg="StartContainer for \"41f35060cdf6b6df381e847cb8b97d027f3975471c773725a99606b10ba8f413\" returns successfully" Apr 17 03:03:20.598528 containerd[1572]: time="2026-04-17T03:03:20.598420129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-vsr8w,Uid:3233f180-eb8f-416b-bb0b-e4ecc1d2ae16,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4590d00110aae81f5b4d75a49c410361e7946bb5c68b47b3e4401fb73860218d\"" Apr 17 03:03:20.600364 containerd[1572]: time="2026-04-17T03:03:20.600299314Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 17 03:03:20.769531 kubelet[2725]: E0417 03:03:20.769494 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:20.769531 kubelet[2725]: E0417 03:03:20.769530 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:22.044191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2483880932.mount: Deactivated successfully. 
Apr 17 03:03:22.499280 containerd[1572]: time="2026-04-17T03:03:22.499161283Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:22.499811 containerd[1572]: time="2026-04-17T03:03:22.499766726Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 17 03:03:22.501317 containerd[1572]: time="2026-04-17T03:03:22.501255005Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:22.503268 containerd[1572]: time="2026-04-17T03:03:22.503244379Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:22.503579 containerd[1572]: time="2026-04-17T03:03:22.503541065Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 1.903092995s" Apr 17 03:03:22.503579 containerd[1572]: time="2026-04-17T03:03:22.503574600Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 17 03:03:22.507266 containerd[1572]: time="2026-04-17T03:03:22.507217962Z" level=info msg="CreateContainer within sandbox \"4590d00110aae81f5b4d75a49c410361e7946bb5c68b47b3e4401fb73860218d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 17 03:03:22.513084 containerd[1572]: time="2026-04-17T03:03:22.513056518Z" level=info msg="Container 
31fe291af06e1a8e414dde0924f5699825088f750b2c782b6239bc7ca0a52172: CDI devices from CRI Config.CDIDevices: []" Apr 17 03:03:22.519227 containerd[1572]: time="2026-04-17T03:03:22.519141107Z" level=info msg="CreateContainer within sandbox \"4590d00110aae81f5b4d75a49c410361e7946bb5c68b47b3e4401fb73860218d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"31fe291af06e1a8e414dde0924f5699825088f750b2c782b6239bc7ca0a52172\"" Apr 17 03:03:22.519839 containerd[1572]: time="2026-04-17T03:03:22.519785863Z" level=info msg="StartContainer for \"31fe291af06e1a8e414dde0924f5699825088f750b2c782b6239bc7ca0a52172\"" Apr 17 03:03:22.520967 containerd[1572]: time="2026-04-17T03:03:22.520431958Z" level=info msg="connecting to shim 31fe291af06e1a8e414dde0924f5699825088f750b2c782b6239bc7ca0a52172" address="unix:///run/containerd/s/cc3043e1e87a08fc0e9aa406ea590b97f922670528af932e3d537fc20f0f046e" protocol=ttrpc version=3 Apr 17 03:03:22.538051 systemd[1]: Started cri-containerd-31fe291af06e1a8e414dde0924f5699825088f750b2c782b6239bc7ca0a52172.scope - libcontainer container 31fe291af06e1a8e414dde0924f5699825088f750b2c782b6239bc7ca0a52172. 
Apr 17 03:03:22.559254 containerd[1572]: time="2026-04-17T03:03:22.559199094Z" level=info msg="StartContainer for \"31fe291af06e1a8e414dde0924f5699825088f750b2c782b6239bc7ca0a52172\" returns successfully" Apr 17 03:03:22.783187 kubelet[2725]: I0417 03:03:22.783012 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5tp9w" podStartSLOduration=2.782803059 podStartE2EDuration="2.782803059s" podCreationTimestamp="2026-04-17 03:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 03:03:20.785563249 +0000 UTC m=+8.107581191" watchObservedRunningTime="2026-04-17 03:03:22.782803059 +0000 UTC m=+10.104821009" Apr 17 03:03:22.783187 kubelet[2725]: I0417 03:03:22.783134 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-vsr8w" podStartSLOduration=0.878841508 podStartE2EDuration="2.783124778s" podCreationTimestamp="2026-04-17 03:03:20 +0000 UTC" firstStartedPulling="2026-04-17 03:03:20.599999965 +0000 UTC m=+7.922017906" lastFinishedPulling="2026-04-17 03:03:22.504283244 +0000 UTC m=+9.826301176" observedRunningTime="2026-04-17 03:03:22.782685632 +0000 UTC m=+10.104703578" watchObservedRunningTime="2026-04-17 03:03:22.783124778 +0000 UTC m=+10.105142719" Apr 17 03:03:23.171394 kubelet[2725]: E0417 03:03:23.171253 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:23.776937 kubelet[2725]: E0417 03:03:23.776839 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:27.450418 sudo[1778]: pam_unix(sudo:session): session closed for user root Apr 17 03:03:27.451348 sshd[1777]: Connection closed by 
10.0.0.1 port 59516 Apr 17 03:03:27.452210 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Apr 17 03:03:27.456303 systemd-logind[1551]: Session 7 logged out. Waiting for processes to exit. Apr 17 03:03:27.457008 systemd[1]: sshd@6-10.0.0.7:22-10.0.0.1:59516.service: Deactivated successfully. Apr 17 03:03:27.459504 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 03:03:27.459789 systemd[1]: session-7.scope: Consumed 3.450s CPU time, 227.3M memory peak. Apr 17 03:03:27.461999 systemd-logind[1551]: Removed session 7. Apr 17 03:03:28.244429 kubelet[2725]: E0417 03:03:28.244375 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:28.977237 systemd[1]: Created slice kubepods-besteffort-podc883cb69_6b86_45fb_806d_e25d9345b304.slice - libcontainer container kubepods-besteffort-podc883cb69_6b86_45fb_806d_e25d9345b304.slice. Apr 17 03:03:29.015878 systemd[1]: Created slice kubepods-besteffort-podc3794f97_4747_4f08_89bc_faf60c01c4f2.slice - libcontainer container kubepods-besteffort-podc3794f97_4747_4f08_89bc_faf60c01c4f2.slice. 
Apr 17 03:03:29.039593 kubelet[2725]: I0417 03:03:29.039546 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c3794f97-4747-4f08-89bc-faf60c01c4f2-node-certs\") pod \"calico-node-bq56h\" (UID: \"c3794f97-4747-4f08-89bc-faf60c01c4f2\") " pod="calico-system/calico-node-bq56h" Apr 17 03:03:29.039593 kubelet[2725]: I0417 03:03:29.039602 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c3794f97-4747-4f08-89bc-faf60c01c4f2-policysync\") pod \"calico-node-bq56h\" (UID: \"c3794f97-4747-4f08-89bc-faf60c01c4f2\") " pod="calico-system/calico-node-bq56h" Apr 17 03:03:29.039593 kubelet[2725]: I0417 03:03:29.039623 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbgpr\" (UniqueName: \"kubernetes.io/projected/c3794f97-4747-4f08-89bc-faf60c01c4f2-kube-api-access-bbgpr\") pod \"calico-node-bq56h\" (UID: \"c3794f97-4747-4f08-89bc-faf60c01c4f2\") " pod="calico-system/calico-node-bq56h" Apr 17 03:03:29.039593 kubelet[2725]: I0417 03:03:29.039637 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c3794f97-4747-4f08-89bc-faf60c01c4f2-cni-log-dir\") pod \"calico-node-bq56h\" (UID: \"c3794f97-4747-4f08-89bc-faf60c01c4f2\") " pod="calico-system/calico-node-bq56h" Apr 17 03:03:29.040142 kubelet[2725]: I0417 03:03:29.039651 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c883cb69-6b86-45fb-806d-e25d9345b304-typha-certs\") pod \"calico-typha-8667f874f-5bdbn\" (UID: \"c883cb69-6b86-45fb-806d-e25d9345b304\") " pod="calico-system/calico-typha-8667f874f-5bdbn" Apr 17 03:03:29.040142 kubelet[2725]: I0417 03:03:29.039691 
2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c3794f97-4747-4f08-89bc-faf60c01c4f2-cni-net-dir\") pod \"calico-node-bq56h\" (UID: \"c3794f97-4747-4f08-89bc-faf60c01c4f2\") " pod="calico-system/calico-node-bq56h" Apr 17 03:03:29.040142 kubelet[2725]: I0417 03:03:29.039702 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3794f97-4747-4f08-89bc-faf60c01c4f2-tigera-ca-bundle\") pod \"calico-node-bq56h\" (UID: \"c3794f97-4747-4f08-89bc-faf60c01c4f2\") " pod="calico-system/calico-node-bq56h" Apr 17 03:03:29.040142 kubelet[2725]: I0417 03:03:29.039752 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c3794f97-4747-4f08-89bc-faf60c01c4f2-var-lib-calico\") pod \"calico-node-bq56h\" (UID: \"c3794f97-4747-4f08-89bc-faf60c01c4f2\") " pod="calico-system/calico-node-bq56h" Apr 17 03:03:29.040142 kubelet[2725]: I0417 03:03:29.039782 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/c3794f97-4747-4f08-89bc-faf60c01c4f2-nodeproc\") pod \"calico-node-bq56h\" (UID: \"c3794f97-4747-4f08-89bc-faf60c01c4f2\") " pod="calico-system/calico-node-bq56h" Apr 17 03:03:29.041524 kubelet[2725]: I0417 03:03:29.039796 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/c3794f97-4747-4f08-89bc-faf60c01c4f2-sys-fs\") pod \"calico-node-bq56h\" (UID: \"c3794f97-4747-4f08-89bc-faf60c01c4f2\") " pod="calico-system/calico-node-bq56h" Apr 17 03:03:29.041524 kubelet[2725]: I0417 03:03:29.039814 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-g7wst\" (UniqueName: \"kubernetes.io/projected/c883cb69-6b86-45fb-806d-e25d9345b304-kube-api-access-g7wst\") pod \"calico-typha-8667f874f-5bdbn\" (UID: \"c883cb69-6b86-45fb-806d-e25d9345b304\") " pod="calico-system/calico-typha-8667f874f-5bdbn" Apr 17 03:03:29.041524 kubelet[2725]: I0417 03:03:29.039828 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c3794f97-4747-4f08-89bc-faf60c01c4f2-cni-bin-dir\") pod \"calico-node-bq56h\" (UID: \"c3794f97-4747-4f08-89bc-faf60c01c4f2\") " pod="calico-system/calico-node-bq56h" Apr 17 03:03:29.041524 kubelet[2725]: I0417 03:03:29.039841 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3794f97-4747-4f08-89bc-faf60c01c4f2-xtables-lock\") pod \"calico-node-bq56h\" (UID: \"c3794f97-4747-4f08-89bc-faf60c01c4f2\") " pod="calico-system/calico-node-bq56h" Apr 17 03:03:29.041524 kubelet[2725]: I0417 03:03:29.039853 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3794f97-4747-4f08-89bc-faf60c01c4f2-lib-modules\") pod \"calico-node-bq56h\" (UID: \"c3794f97-4747-4f08-89bc-faf60c01c4f2\") " pod="calico-system/calico-node-bq56h" Apr 17 03:03:29.041684 kubelet[2725]: I0417 03:03:29.039881 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c883cb69-6b86-45fb-806d-e25d9345b304-tigera-ca-bundle\") pod \"calico-typha-8667f874f-5bdbn\" (UID: \"c883cb69-6b86-45fb-806d-e25d9345b304\") " pod="calico-system/calico-typha-8667f874f-5bdbn" Apr 17 03:03:29.041684 kubelet[2725]: I0417 03:03:29.039937 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c3794f97-4747-4f08-89bc-faf60c01c4f2-flexvol-driver-host\") pod \"calico-node-bq56h\" (UID: \"c3794f97-4747-4f08-89bc-faf60c01c4f2\") " pod="calico-system/calico-node-bq56h" Apr 17 03:03:29.041684 kubelet[2725]: I0417 03:03:29.039952 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c3794f97-4747-4f08-89bc-faf60c01c4f2-var-run-calico\") pod \"calico-node-bq56h\" (UID: \"c3794f97-4747-4f08-89bc-faf60c01c4f2\") " pod="calico-system/calico-node-bq56h" Apr 17 03:03:29.041684 kubelet[2725]: I0417 03:03:29.039973 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/c3794f97-4747-4f08-89bc-faf60c01c4f2-bpffs\") pod \"calico-node-bq56h\" (UID: \"c3794f97-4747-4f08-89bc-faf60c01c4f2\") " pod="calico-system/calico-node-bq56h" Apr 17 03:03:29.129314 kubelet[2725]: E0417 03:03:29.128880 2725 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5pbx6" podUID="484b28cf-729f-4fde-afb9-2d7a3393fea4" Apr 17 03:03:29.140755 kubelet[2725]: I0417 03:03:29.140296 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/484b28cf-729f-4fde-afb9-2d7a3393fea4-registration-dir\") pod \"csi-node-driver-5pbx6\" (UID: \"484b28cf-729f-4fde-afb9-2d7a3393fea4\") " pod="calico-system/csi-node-driver-5pbx6" Apr 17 03:03:29.140755 kubelet[2725]: I0417 03:03:29.140332 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/484b28cf-729f-4fde-afb9-2d7a3393fea4-socket-dir\") pod \"csi-node-driver-5pbx6\" (UID: \"484b28cf-729f-4fde-afb9-2d7a3393fea4\") " pod="calico-system/csi-node-driver-5pbx6" Apr 17 03:03:29.140755 kubelet[2725]: I0417 03:03:29.140379 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/484b28cf-729f-4fde-afb9-2d7a3393fea4-varrun\") pod \"csi-node-driver-5pbx6\" (UID: \"484b28cf-729f-4fde-afb9-2d7a3393fea4\") " pod="calico-system/csi-node-driver-5pbx6" Apr 17 03:03:29.140755 kubelet[2725]: I0417 03:03:29.140451 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84qm9\" (UniqueName: \"kubernetes.io/projected/484b28cf-729f-4fde-afb9-2d7a3393fea4-kube-api-access-84qm9\") pod \"csi-node-driver-5pbx6\" (UID: \"484b28cf-729f-4fde-afb9-2d7a3393fea4\") " pod="calico-system/csi-node-driver-5pbx6" Apr 17 03:03:29.140755 kubelet[2725]: I0417 03:03:29.140468 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/484b28cf-729f-4fde-afb9-2d7a3393fea4-kubelet-dir\") pod \"csi-node-driver-5pbx6\" (UID: \"484b28cf-729f-4fde-afb9-2d7a3393fea4\") " pod="calico-system/csi-node-driver-5pbx6" Apr 17 03:03:29.143597 kubelet[2725]: E0417 03:03:29.143374 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.143597 kubelet[2725]: W0417 03:03:29.143395 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.143597 kubelet[2725]: E0417 03:03:29.143431 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.145141 kubelet[2725]: E0417 03:03:29.145106 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.146043 kubelet[2725]: W0417 03:03:29.146028 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.146132 kubelet[2725]: E0417 03:03:29.146119 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.147461 kubelet[2725]: E0417 03:03:29.146330 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.147461 kubelet[2725]: W0417 03:03:29.147434 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.147461 kubelet[2725]: E0417 03:03:29.147450 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.151074 kubelet[2725]: E0417 03:03:29.151048 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.151459 kubelet[2725]: W0417 03:03:29.151290 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.151459 kubelet[2725]: E0417 03:03:29.151325 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.153733 kubelet[2725]: E0417 03:03:29.153386 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.153733 kubelet[2725]: W0417 03:03:29.153397 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.153733 kubelet[2725]: E0417 03:03:29.153406 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.156065 kubelet[2725]: E0417 03:03:29.156046 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.156065 kubelet[2725]: W0417 03:03:29.156058 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.156166 kubelet[2725]: E0417 03:03:29.156067 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.156238 kubelet[2725]: E0417 03:03:29.156205 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.156238 kubelet[2725]: W0417 03:03:29.156224 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.156238 kubelet[2725]: E0417 03:03:29.156231 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.156989 kubelet[2725]: E0417 03:03:29.156370 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.156989 kubelet[2725]: W0417 03:03:29.156385 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.156989 kubelet[2725]: E0417 03:03:29.156393 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.158995 kubelet[2725]: E0417 03:03:29.158020 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.158995 kubelet[2725]: W0417 03:03:29.158047 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.158995 kubelet[2725]: E0417 03:03:29.158069 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.158995 kubelet[2725]: E0417 03:03:29.158219 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.158995 kubelet[2725]: W0417 03:03:29.158224 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.158995 kubelet[2725]: E0417 03:03:29.158230 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.158995 kubelet[2725]: E0417 03:03:29.158335 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.158995 kubelet[2725]: W0417 03:03:29.158339 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.158995 kubelet[2725]: E0417 03:03:29.158345 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.158995 kubelet[2725]: E0417 03:03:29.158492 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.159214 kubelet[2725]: W0417 03:03:29.158497 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.159214 kubelet[2725]: E0417 03:03:29.158503 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.159214 kubelet[2725]: E0417 03:03:29.159024 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.159214 kubelet[2725]: W0417 03:03:29.159031 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.159214 kubelet[2725]: E0417 03:03:29.159038 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.161023 kubelet[2725]: E0417 03:03:29.159358 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.161023 kubelet[2725]: W0417 03:03:29.159380 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.161023 kubelet[2725]: E0417 03:03:29.159388 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.161023 kubelet[2725]: E0417 03:03:29.159519 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.161023 kubelet[2725]: W0417 03:03:29.159523 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.161023 kubelet[2725]: E0417 03:03:29.159528 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.161023 kubelet[2725]: E0417 03:03:29.159616 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.161023 kubelet[2725]: W0417 03:03:29.159620 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.161023 kubelet[2725]: E0417 03:03:29.159625 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.161023 kubelet[2725]: E0417 03:03:29.159774 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.161329 kubelet[2725]: W0417 03:03:29.159779 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.161329 kubelet[2725]: E0417 03:03:29.159784 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.161329 kubelet[2725]: E0417 03:03:29.159877 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.161329 kubelet[2725]: W0417 03:03:29.159883 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.161329 kubelet[2725]: E0417 03:03:29.159888 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.161329 kubelet[2725]: E0417 03:03:29.160046 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.161329 kubelet[2725]: W0417 03:03:29.160052 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.161329 kubelet[2725]: E0417 03:03:29.160061 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.171106 kubelet[2725]: E0417 03:03:29.171063 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.171106 kubelet[2725]: W0417 03:03:29.171079 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.171106 kubelet[2725]: E0417 03:03:29.171111 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.242856 kubelet[2725]: E0417 03:03:29.242715 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.242856 kubelet[2725]: W0417 03:03:29.242754 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.242856 kubelet[2725]: E0417 03:03:29.242789 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.243793 kubelet[2725]: E0417 03:03:29.243689 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.243990 kubelet[2725]: W0417 03:03:29.243869 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.244039 kubelet[2725]: E0417 03:03:29.244009 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.244450 kubelet[2725]: E0417 03:03:29.244343 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.244450 kubelet[2725]: W0417 03:03:29.244358 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.244450 kubelet[2725]: E0417 03:03:29.244370 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.244854 kubelet[2725]: E0417 03:03:29.244775 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.244854 kubelet[2725]: W0417 03:03:29.244784 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.244854 kubelet[2725]: E0417 03:03:29.244792 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.245085 kubelet[2725]: E0417 03:03:29.245061 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.245085 kubelet[2725]: W0417 03:03:29.245077 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.245085 kubelet[2725]: E0417 03:03:29.245085 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.245323 kubelet[2725]: E0417 03:03:29.245291 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.245323 kubelet[2725]: W0417 03:03:29.245306 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.245323 kubelet[2725]: E0417 03:03:29.245313 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.245444 kubelet[2725]: E0417 03:03:29.245429 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.245444 kubelet[2725]: W0417 03:03:29.245440 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.245505 kubelet[2725]: E0417 03:03:29.245446 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.245640 kubelet[2725]: E0417 03:03:29.245624 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.245640 kubelet[2725]: W0417 03:03:29.245635 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.245640 kubelet[2725]: E0417 03:03:29.245640 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.245819 kubelet[2725]: E0417 03:03:29.245803 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.245819 kubelet[2725]: W0417 03:03:29.245813 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.245819 kubelet[2725]: E0417 03:03:29.245818 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.245999 kubelet[2725]: E0417 03:03:29.245968 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.245999 kubelet[2725]: W0417 03:03:29.245983 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.245999 kubelet[2725]: E0417 03:03:29.245992 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.246240 kubelet[2725]: E0417 03:03:29.246201 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.246240 kubelet[2725]: W0417 03:03:29.246218 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.246240 kubelet[2725]: E0417 03:03:29.246224 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.246547 kubelet[2725]: E0417 03:03:29.246527 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.246547 kubelet[2725]: W0417 03:03:29.246545 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.246611 kubelet[2725]: E0417 03:03:29.246562 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.247015 kubelet[2725]: E0417 03:03:29.246983 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.247015 kubelet[2725]: W0417 03:03:29.247001 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.247180 kubelet[2725]: E0417 03:03:29.247028 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.247180 kubelet[2725]: E0417 03:03:29.247174 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.247180 kubelet[2725]: W0417 03:03:29.247180 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.247255 kubelet[2725]: E0417 03:03:29.247187 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.247398 kubelet[2725]: E0417 03:03:29.247384 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.247398 kubelet[2725]: W0417 03:03:29.247395 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.247447 kubelet[2725]: E0417 03:03:29.247401 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.247767 kubelet[2725]: E0417 03:03:29.247687 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.247876 kubelet[2725]: W0417 03:03:29.247784 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.247876 kubelet[2725]: E0417 03:03:29.247813 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.248055 kubelet[2725]: E0417 03:03:29.248042 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.248055 kubelet[2725]: W0417 03:03:29.248055 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.248102 kubelet[2725]: E0417 03:03:29.248063 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.248514 kubelet[2725]: E0417 03:03:29.248465 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.248514 kubelet[2725]: W0417 03:03:29.248499 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.248718 kubelet[2725]: E0417 03:03:29.248531 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.248866 kubelet[2725]: E0417 03:03:29.248837 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.248866 kubelet[2725]: W0417 03:03:29.248858 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.248980 kubelet[2725]: E0417 03:03:29.248875 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.249149 kubelet[2725]: E0417 03:03:29.249058 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.249281 kubelet[2725]: W0417 03:03:29.249162 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.249281 kubelet[2725]: E0417 03:03:29.249200 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.249427 kubelet[2725]: E0417 03:03:29.249406 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.249427 kubelet[2725]: W0417 03:03:29.249419 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.249427 kubelet[2725]: E0417 03:03:29.249427 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.249588 kubelet[2725]: E0417 03:03:29.249571 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.249588 kubelet[2725]: W0417 03:03:29.249582 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.249645 kubelet[2725]: E0417 03:03:29.249589 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.249783 kubelet[2725]: E0417 03:03:29.249764 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.249783 kubelet[2725]: W0417 03:03:29.249779 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.249846 kubelet[2725]: E0417 03:03:29.249791 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.250052 kubelet[2725]: E0417 03:03:29.250034 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.250052 kubelet[2725]: W0417 03:03:29.250046 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.250147 kubelet[2725]: E0417 03:03:29.250053 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.250237 kubelet[2725]: E0417 03:03:29.250221 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.250237 kubelet[2725]: W0417 03:03:29.250233 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.250296 kubelet[2725]: E0417 03:03:29.250239 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 03:03:29.262280 kubelet[2725]: E0417 03:03:29.262227 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 03:03:29.262280 kubelet[2725]: W0417 03:03:29.262245 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 03:03:29.262280 kubelet[2725]: E0417 03:03:29.262290 2725 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 03:03:29.284894 kubelet[2725]: E0417 03:03:29.284755 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:29.285656 containerd[1572]: time="2026-04-17T03:03:29.285610683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8667f874f-5bdbn,Uid:c883cb69-6b86-45fb-806d-e25d9345b304,Namespace:calico-system,Attempt:0,}" Apr 17 03:03:29.321732 containerd[1572]: time="2026-04-17T03:03:29.321656649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bq56h,Uid:c3794f97-4747-4f08-89bc-faf60c01c4f2,Namespace:calico-system,Attempt:0,}" Apr 17 03:03:29.326542 containerd[1572]: time="2026-04-17T03:03:29.326474756Z" level=info msg="connecting to shim 5c1039413ed7a1a935f16a2bb70086d8a7394b9f353c153527a9b43f8ca12c01" address="unix:///run/containerd/s/d092a8c3970ac39738c4cf015d7109b2f6969b5404615cba565885d42169b979" namespace=k8s.io protocol=ttrpc version=3 Apr 17 03:03:29.338949 containerd[1572]: time="2026-04-17T03:03:29.338697827Z" level=info msg="connecting to shim 6854be8fb6f7c6d2c3c38e4f8e08a93f6cbb0cb976991fd432f6cbe83c897781" 
address="unix:///run/containerd/s/6202c4e15a5902107e7f9c950b2f7e6e40657a674fbded54b1f7efcab2265a48" namespace=k8s.io protocol=ttrpc version=3 Apr 17 03:03:29.365091 systemd[1]: Started cri-containerd-6854be8fb6f7c6d2c3c38e4f8e08a93f6cbb0cb976991fd432f6cbe83c897781.scope - libcontainer container 6854be8fb6f7c6d2c3c38e4f8e08a93f6cbb0cb976991fd432f6cbe83c897781. Apr 17 03:03:29.367362 systemd[1]: Started cri-containerd-5c1039413ed7a1a935f16a2bb70086d8a7394b9f353c153527a9b43f8ca12c01.scope - libcontainer container 5c1039413ed7a1a935f16a2bb70086d8a7394b9f353c153527a9b43f8ca12c01. Apr 17 03:03:29.387086 containerd[1572]: time="2026-04-17T03:03:29.387035647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bq56h,Uid:c3794f97-4747-4f08-89bc-faf60c01c4f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"6854be8fb6f7c6d2c3c38e4f8e08a93f6cbb0cb976991fd432f6cbe83c897781\"" Apr 17 03:03:29.390217 containerd[1572]: time="2026-04-17T03:03:29.390149226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 17 03:03:29.406183 containerd[1572]: time="2026-04-17T03:03:29.406129899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8667f874f-5bdbn,Uid:c883cb69-6b86-45fb-806d-e25d9345b304,Namespace:calico-system,Attempt:0,} returns sandbox id \"5c1039413ed7a1a935f16a2bb70086d8a7394b9f353c153527a9b43f8ca12c01\"" Apr 17 03:03:29.406819 kubelet[2725]: E0417 03:03:29.406801 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:30.746620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2186315429.mount: Deactivated successfully. 
Apr 17 03:03:30.747931 kubelet[2725]: E0417 03:03:30.747721 2725 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5pbx6" podUID="484b28cf-729f-4fde-afb9-2d7a3393fea4" Apr 17 03:03:30.815376 containerd[1572]: time="2026-04-17T03:03:30.815317022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:30.816099 containerd[1572]: time="2026-04-17T03:03:30.816066860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Apr 17 03:03:30.817422 containerd[1572]: time="2026-04-17T03:03:30.817371342Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:30.819656 containerd[1572]: time="2026-04-17T03:03:30.819597994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:30.820102 containerd[1572]: time="2026-04-17T03:03:30.820050230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.429845471s" Apr 17 03:03:30.820102 containerd[1572]: time="2026-04-17T03:03:30.820090612Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 17 03:03:30.821391 containerd[1572]: time="2026-04-17T03:03:30.821360597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 17 03:03:30.828229 containerd[1572]: time="2026-04-17T03:03:30.828192343Z" level=info msg="CreateContainer within sandbox \"6854be8fb6f7c6d2c3c38e4f8e08a93f6cbb0cb976991fd432f6cbe83c897781\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 17 03:03:30.837203 containerd[1572]: time="2026-04-17T03:03:30.835281336Z" level=info msg="Container 2e364b10c0b448e91a80d303d3e958b84165afa247c321ce1aadbeb143c96b6b: CDI devices from CRI Config.CDIDevices: []" Apr 17 03:03:30.843356 containerd[1572]: time="2026-04-17T03:03:30.843317938Z" level=info msg="CreateContainer within sandbox \"6854be8fb6f7c6d2c3c38e4f8e08a93f6cbb0cb976991fd432f6cbe83c897781\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2e364b10c0b448e91a80d303d3e958b84165afa247c321ce1aadbeb143c96b6b\"" Apr 17 03:03:30.843865 containerd[1572]: time="2026-04-17T03:03:30.843839343Z" level=info msg="StartContainer for \"2e364b10c0b448e91a80d303d3e958b84165afa247c321ce1aadbeb143c96b6b\"" Apr 17 03:03:30.844875 containerd[1572]: time="2026-04-17T03:03:30.844848587Z" level=info msg="connecting to shim 2e364b10c0b448e91a80d303d3e958b84165afa247c321ce1aadbeb143c96b6b" address="unix:///run/containerd/s/6202c4e15a5902107e7f9c950b2f7e6e40657a674fbded54b1f7efcab2265a48" protocol=ttrpc version=3 Apr 17 03:03:30.864247 systemd[1]: Started cri-containerd-2e364b10c0b448e91a80d303d3e958b84165afa247c321ce1aadbeb143c96b6b.scope - libcontainer container 2e364b10c0b448e91a80d303d3e958b84165afa247c321ce1aadbeb143c96b6b. 
Apr 17 03:03:30.946351 containerd[1572]: time="2026-04-17T03:03:30.946169524Z" level=info msg="StartContainer for \"2e364b10c0b448e91a80d303d3e958b84165afa247c321ce1aadbeb143c96b6b\" returns successfully" Apr 17 03:03:30.984427 systemd[1]: cri-containerd-2e364b10c0b448e91a80d303d3e958b84165afa247c321ce1aadbeb143c96b6b.scope: Deactivated successfully. Apr 17 03:03:30.987935 containerd[1572]: time="2026-04-17T03:03:30.987841032Z" level=info msg="received container exit event container_id:\"2e364b10c0b448e91a80d303d3e958b84165afa247c321ce1aadbeb143c96b6b\" id:\"2e364b10c0b448e91a80d303d3e958b84165afa247c321ce1aadbeb143c96b6b\" pid:3307 exited_at:{seconds:1776395010 nanos:987258739}" Apr 17 03:03:32.747447 kubelet[2725]: E0417 03:03:32.747346 2725 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5pbx6" podUID="484b28cf-729f-4fde-afb9-2d7a3393fea4" Apr 17 03:03:34.459018 containerd[1572]: time="2026-04-17T03:03:34.458968941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:34.459720 containerd[1572]: time="2026-04-17T03:03:34.459685649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Apr 17 03:03:34.460753 containerd[1572]: time="2026-04-17T03:03:34.460723329Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:34.462527 containerd[1572]: time="2026-04-17T03:03:34.462482449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 17 03:03:34.462947 containerd[1572]: time="2026-04-17T03:03:34.462876403Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.641460401s" Apr 17 03:03:34.462987 containerd[1572]: time="2026-04-17T03:03:34.462973895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 17 03:03:34.464113 containerd[1572]: time="2026-04-17T03:03:34.464042925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 17 03:03:34.473432 containerd[1572]: time="2026-04-17T03:03:34.473405300Z" level=info msg="CreateContainer within sandbox \"5c1039413ed7a1a935f16a2bb70086d8a7394b9f353c153527a9b43f8ca12c01\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 17 03:03:34.480242 containerd[1572]: time="2026-04-17T03:03:34.480201470Z" level=info msg="Container e4bf0e1ea2b83f695cf4dd0a62267ac5a85b5593541d8cfcd4e014a456cd752c: CDI devices from CRI Config.CDIDevices: []" Apr 17 03:03:34.487345 containerd[1572]: time="2026-04-17T03:03:34.487312626Z" level=info msg="CreateContainer within sandbox \"5c1039413ed7a1a935f16a2bb70086d8a7394b9f353c153527a9b43f8ca12c01\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e4bf0e1ea2b83f695cf4dd0a62267ac5a85b5593541d8cfcd4e014a456cd752c\"" Apr 17 03:03:34.487851 containerd[1572]: time="2026-04-17T03:03:34.487832620Z" level=info msg="StartContainer for \"e4bf0e1ea2b83f695cf4dd0a62267ac5a85b5593541d8cfcd4e014a456cd752c\"" Apr 17 03:03:34.488673 containerd[1572]: time="2026-04-17T03:03:34.488639714Z" level=info msg="connecting to shim 
e4bf0e1ea2b83f695cf4dd0a62267ac5a85b5593541d8cfcd4e014a456cd752c" address="unix:///run/containerd/s/d092a8c3970ac39738c4cf015d7109b2f6969b5404615cba565885d42169b979" protocol=ttrpc version=3
Apr 17 03:03:34.506093 systemd[1]: Started cri-containerd-e4bf0e1ea2b83f695cf4dd0a62267ac5a85b5593541d8cfcd4e014a456cd752c.scope - libcontainer container e4bf0e1ea2b83f695cf4dd0a62267ac5a85b5593541d8cfcd4e014a456cd752c.
Apr 17 03:03:34.547659 containerd[1572]: time="2026-04-17T03:03:34.547612202Z" level=info msg="StartContainer for \"e4bf0e1ea2b83f695cf4dd0a62267ac5a85b5593541d8cfcd4e014a456cd752c\" returns successfully"
Apr 17 03:03:34.748422 kubelet[2725]: E0417 03:03:34.748285 2725 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5pbx6" podUID="484b28cf-729f-4fde-afb9-2d7a3393fea4"
Apr 17 03:03:34.813040 kubelet[2725]: E0417 03:03:34.813006 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 03:03:35.815697 kubelet[2725]: I0417 03:03:35.815631 2725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 17 03:03:35.817365 kubelet[2725]: E0417 03:03:35.816120 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 03:03:36.747831 kubelet[2725]: E0417 03:03:36.747757 2725 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5pbx6" podUID="484b28cf-729f-4fde-afb9-2d7a3393fea4"
Apr 17 03:03:36.994607 update_engine[1554]: I20260417 03:03:36.994468 1554 update_attempter.cc:509] Updating boot flags...
Apr 17 03:03:38.589043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2099420777.mount: Deactivated successfully.
Apr 17 03:03:38.747174 kubelet[2725]: E0417 03:03:38.747088 2725 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5pbx6" podUID="484b28cf-729f-4fde-afb9-2d7a3393fea4"
Apr 17 03:03:38.765767 containerd[1572]: time="2026-04-17T03:03:38.765716141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 03:03:38.766459 containerd[1572]: time="2026-04-17T03:03:38.766424066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Apr 17 03:03:38.767569 containerd[1572]: time="2026-04-17T03:03:38.767526237Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 03:03:38.775277 containerd[1572]: time="2026-04-17T03:03:38.775194843Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 03:03:38.775776 containerd[1572]: time="2026-04-17T03:03:38.775708518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.311643281s"
Apr 17 03:03:38.775776 containerd[1572]: time="2026-04-17T03:03:38.775740236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Apr 17 03:03:38.780749 containerd[1572]: time="2026-04-17T03:03:38.780704769Z" level=info msg="CreateContainer within sandbox \"6854be8fb6f7c6d2c3c38e4f8e08a93f6cbb0cb976991fd432f6cbe83c897781\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 17 03:03:38.833635 containerd[1572]: time="2026-04-17T03:03:38.833182785Z" level=info msg="Container 85623701f2b1096d1944822706a51d84d95ab017f4361e2819187b9ae02cc684: CDI devices from CRI Config.CDIDevices: []"
Apr 17 03:03:38.868398 containerd[1572]: time="2026-04-17T03:03:38.868272647Z" level=info msg="CreateContainer within sandbox \"6854be8fb6f7c6d2c3c38e4f8e08a93f6cbb0cb976991fd432f6cbe83c897781\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"85623701f2b1096d1944822706a51d84d95ab017f4361e2819187b9ae02cc684\""
Apr 17 03:03:38.868786 containerd[1572]: time="2026-04-17T03:03:38.868756628Z" level=info msg="StartContainer for \"85623701f2b1096d1944822706a51d84d95ab017f4361e2819187b9ae02cc684\""
Apr 17 03:03:38.869774 containerd[1572]: time="2026-04-17T03:03:38.869734828Z" level=info msg="connecting to shim 85623701f2b1096d1944822706a51d84d95ab017f4361e2819187b9ae02cc684" address="unix:///run/containerd/s/6202c4e15a5902107e7f9c950b2f7e6e40657a674fbded54b1f7efcab2265a48" protocol=ttrpc version=3
Apr 17 03:03:38.887078 systemd[1]: Started cri-containerd-85623701f2b1096d1944822706a51d84d95ab017f4361e2819187b9ae02cc684.scope - libcontainer container 85623701f2b1096d1944822706a51d84d95ab017f4361e2819187b9ae02cc684.
Apr 17 03:03:38.969949 containerd[1572]: time="2026-04-17T03:03:38.969590310Z" level=info msg="StartContainer for \"85623701f2b1096d1944822706a51d84d95ab017f4361e2819187b9ae02cc684\" returns successfully"
Apr 17 03:03:38.997886 systemd[1]: cri-containerd-85623701f2b1096d1944822706a51d84d95ab017f4361e2819187b9ae02cc684.scope: Deactivated successfully.
Apr 17 03:03:39.006804 containerd[1572]: time="2026-04-17T03:03:39.006720050Z" level=info msg="received container exit event container_id:\"85623701f2b1096d1944822706a51d84d95ab017f4361e2819187b9ae02cc684\" id:\"85623701f2b1096d1944822706a51d84d95ab017f4361e2819187b9ae02cc684\" pid:3424 exited_at:{seconds:1776395018 nanos:999311977}"
Apr 17 03:03:39.588902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85623701f2b1096d1944822706a51d84d95ab017f4361e2819187b9ae02cc684-rootfs.mount: Deactivated successfully.
Apr 17 03:03:39.828070 containerd[1572]: time="2026-04-17T03:03:39.828023827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 17 03:03:39.853011 kubelet[2725]: I0417 03:03:39.852645 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8667f874f-5bdbn" podStartSLOduration=6.796801368 podStartE2EDuration="11.85262588s" podCreationTimestamp="2026-04-17 03:03:28 +0000 UTC" firstStartedPulling="2026-04-17 03:03:29.407865008 +0000 UTC m=+16.729882941" lastFinishedPulling="2026-04-17 03:03:34.463689521 +0000 UTC m=+21.785707453" observedRunningTime="2026-04-17 03:03:34.832943895 +0000 UTC m=+22.154961835" watchObservedRunningTime="2026-04-17 03:03:39.85262588 +0000 UTC m=+27.174643840"
Apr 17 03:03:40.747582 kubelet[2725]: E0417 03:03:40.747514 2725 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5pbx6" podUID="484b28cf-729f-4fde-afb9-2d7a3393fea4"
Apr 17 03:03:42.747610 kubelet[2725]: E0417 03:03:42.747546 2725 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5pbx6" podUID="484b28cf-729f-4fde-afb9-2d7a3393fea4"
Apr 17 03:03:42.930454 containerd[1572]: time="2026-04-17T03:03:42.930389897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 03:03:42.931071 containerd[1572]: time="2026-04-17T03:03:42.931043259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Apr 17 03:03:42.932589 containerd[1572]: time="2026-04-17T03:03:42.932534625Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 03:03:42.935009 containerd[1572]: time="2026-04-17T03:03:42.934958838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 03:03:42.935516 containerd[1572]: time="2026-04-17T03:03:42.935492194Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.107425392s"
Apr 17 03:03:42.935560 containerd[1572]: time="2026-04-17T03:03:42.935521686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Apr 17 03:03:42.939340 containerd[1572]: time="2026-04-17T03:03:42.939300085Z" level=info msg="CreateContainer within sandbox \"6854be8fb6f7c6d2c3c38e4f8e08a93f6cbb0cb976991fd432f6cbe83c897781\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 17 03:03:42.946081 containerd[1572]: time="2026-04-17T03:03:42.946048596Z" level=info msg="Container 801d8864cd250fee6a91f820f90e83e5fe445c0d92d40547557265c252fb4658: CDI devices from CRI Config.CDIDevices: []"
Apr 17 03:03:42.954256 containerd[1572]: time="2026-04-17T03:03:42.954208382Z" level=info msg="CreateContainer within sandbox \"6854be8fb6f7c6d2c3c38e4f8e08a93f6cbb0cb976991fd432f6cbe83c897781\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"801d8864cd250fee6a91f820f90e83e5fe445c0d92d40547557265c252fb4658\""
Apr 17 03:03:42.954744 containerd[1572]: time="2026-04-17T03:03:42.954692600Z" level=info msg="StartContainer for \"801d8864cd250fee6a91f820f90e83e5fe445c0d92d40547557265c252fb4658\""
Apr 17 03:03:42.955936 containerd[1572]: time="2026-04-17T03:03:42.955661391Z" level=info msg="connecting to shim 801d8864cd250fee6a91f820f90e83e5fe445c0d92d40547557265c252fb4658" address="unix:///run/containerd/s/6202c4e15a5902107e7f9c950b2f7e6e40657a674fbded54b1f7efcab2265a48" protocol=ttrpc version=3
Apr 17 03:03:42.974082 systemd[1]: Started cri-containerd-801d8864cd250fee6a91f820f90e83e5fe445c0d92d40547557265c252fb4658.scope - libcontainer container 801d8864cd250fee6a91f820f90e83e5fe445c0d92d40547557265c252fb4658.
Apr 17 03:03:43.028583 containerd[1572]: time="2026-04-17T03:03:43.028112039Z" level=info msg="StartContainer for \"801d8864cd250fee6a91f820f90e83e5fe445c0d92d40547557265c252fb4658\" returns successfully"
Apr 17 03:03:43.448371 systemd[1]: cri-containerd-801d8864cd250fee6a91f820f90e83e5fe445c0d92d40547557265c252fb4658.scope: Deactivated successfully.
Apr 17 03:03:43.449416 systemd[1]: cri-containerd-801d8864cd250fee6a91f820f90e83e5fe445c0d92d40547557265c252fb4658.scope: Consumed 454ms CPU time, 180.1M memory peak, 2.5M read from disk, 177M written to disk.
Apr 17 03:03:43.451214 containerd[1572]: time="2026-04-17T03:03:43.451107546Z" level=info msg="received container exit event container_id:\"801d8864cd250fee6a91f820f90e83e5fe445c0d92d40547557265c252fb4658\" id:\"801d8864cd250fee6a91f820f90e83e5fe445c0d92d40547557265c252fb4658\" pid:3482 exited_at:{seconds:1776395023 nanos:450544833}"
Apr 17 03:03:43.472601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-801d8864cd250fee6a91f820f90e83e5fe445c0d92d40547557265c252fb4658-rootfs.mount: Deactivated successfully.
Apr 17 03:03:43.477346 kubelet[2725]: I0417 03:03:43.477289 2725 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Apr 17 03:03:43.541814 systemd[1]: Created slice kubepods-burstable-pod1fe8bbc7_3295_4ad2_bdf8_87ef564a3bb1.slice - libcontainer container kubepods-burstable-pod1fe8bbc7_3295_4ad2_bdf8_87ef564a3bb1.slice.
Apr 17 03:03:43.549797 systemd[1]: Created slice kubepods-burstable-podc2693bff_f36f_4e79_8b27_a87e39664a97.slice - libcontainer container kubepods-burstable-podc2693bff_f36f_4e79_8b27_a87e39664a97.slice.
Apr 17 03:03:43.560652 systemd[1]: Created slice kubepods-besteffort-podb8e9572e_cd21_4783_81eb_03cb12ebcc87.slice - libcontainer container kubepods-besteffort-podb8e9572e_cd21_4783_81eb_03cb12ebcc87.slice.
Apr 17 03:03:43.568654 systemd[1]: Created slice kubepods-besteffort-poda023fec7_9b08_45e6_b187_85e88df49048.slice - libcontainer container kubepods-besteffort-poda023fec7_9b08_45e6_b187_85e88df49048.slice.
Apr 17 03:03:43.573564 systemd[1]: Created slice kubepods-besteffort-podec24544f_dd99_4880_8240_7915f1266d12.slice - libcontainer container kubepods-besteffort-podec24544f_dd99_4880_8240_7915f1266d12.slice.
Apr 17 03:03:43.579581 systemd[1]: Created slice kubepods-besteffort-pod8bf2c43b_809d_4d53_8950_e87873e687fe.slice - libcontainer container kubepods-besteffort-pod8bf2c43b_809d_4d53_8950_e87873e687fe.slice.
Apr 17 03:03:43.585195 systemd[1]: Created slice kubepods-besteffort-pode24d71c5_7063_40d8_9b55_b8ca8f5e8578.slice - libcontainer container kubepods-besteffort-pode24d71c5_7063_40d8_9b55_b8ca8f5e8578.slice.
Apr 17 03:03:43.666025 kubelet[2725]: I0417 03:03:43.665928 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8bf2c43b-809d-4d53-8950-e87873e687fe-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-5h9fm\" (UID: \"8bf2c43b-809d-4d53-8950-e87873e687fe\") " pod="calico-system/goldmane-cccfbd5cf-5h9fm"
Apr 17 03:03:43.666025 kubelet[2725]: I0417 03:03:43.665975 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n58w9\" (UniqueName: \"kubernetes.io/projected/8bf2c43b-809d-4d53-8950-e87873e687fe-kube-api-access-n58w9\") pod \"goldmane-cccfbd5cf-5h9fm\" (UID: \"8bf2c43b-809d-4d53-8950-e87873e687fe\") " pod="calico-system/goldmane-cccfbd5cf-5h9fm"
Apr 17 03:03:43.666025 kubelet[2725]: I0417 03:03:43.666002 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a023fec7-9b08-45e6-b187-85e88df49048-calico-apiserver-certs\") pod \"calico-apiserver-57d76c7b76-rl429\" (UID: \"a023fec7-9b08-45e6-b187-85e88df49048\") " pod="calico-system/calico-apiserver-57d76c7b76-rl429"
Apr 17 03:03:43.666025 kubelet[2725]: I0417 03:03:43.666015 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2693bff-f36f-4e79-8b27-a87e39664a97-config-volume\") pod \"coredns-66bc5c9577-pnqxl\" (UID: \"c2693bff-f36f-4e79-8b27-a87e39664a97\") " pod="kube-system/coredns-66bc5c9577-pnqxl"
Apr 17 03:03:43.666025 kubelet[2725]: I0417 03:03:43.666032 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bf2c43b-809d-4d53-8950-e87873e687fe-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-5h9fm\" (UID: \"8bf2c43b-809d-4d53-8950-e87873e687fe\") " pod="calico-system/goldmane-cccfbd5cf-5h9fm"
Apr 17 03:03:43.666494 kubelet[2725]: I0417 03:03:43.666054 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b8e9572e-cd21-4783-81eb-03cb12ebcc87-calico-apiserver-certs\") pod \"calico-apiserver-57d76c7b76-gmmgs\" (UID: \"b8e9572e-cd21-4783-81eb-03cb12ebcc87\") " pod="calico-system/calico-apiserver-57d76c7b76-gmmgs"
Apr 17 03:03:43.666494 kubelet[2725]: I0417 03:03:43.666067 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbnkq\" (UniqueName: \"kubernetes.io/projected/b8e9572e-cd21-4783-81eb-03cb12ebcc87-kube-api-access-fbnkq\") pod \"calico-apiserver-57d76c7b76-gmmgs\" (UID: \"b8e9572e-cd21-4783-81eb-03cb12ebcc87\") " pod="calico-system/calico-apiserver-57d76c7b76-gmmgs"
Apr 17 03:03:43.666494 kubelet[2725]: I0417 03:03:43.666148 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lffvd\" (UniqueName: \"kubernetes.io/projected/e24d71c5-7063-40d8-9b55-b8ca8f5e8578-kube-api-access-lffvd\") pod \"calico-kube-controllers-c57849b64-mlx2h\" (UID: \"e24d71c5-7063-40d8-9b55-b8ca8f5e8578\") " pod="calico-system/calico-kube-controllers-c57849b64-mlx2h"
Apr 17 03:03:43.666494 kubelet[2725]: I0417 03:03:43.666181 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbh8w\" (UniqueName: \"kubernetes.io/projected/a023fec7-9b08-45e6-b187-85e88df49048-kube-api-access-sbh8w\") pod \"calico-apiserver-57d76c7b76-rl429\" (UID: \"a023fec7-9b08-45e6-b187-85e88df49048\") " pod="calico-system/calico-apiserver-57d76c7b76-rl429"
Apr 17 03:03:43.666494 kubelet[2725]: I0417 03:03:43.666221 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ec24544f-dd99-4880-8240-7915f1266d12-whisker-backend-key-pair\") pod \"whisker-bfd4644d7-dftk2\" (UID: \"ec24544f-dd99-4880-8240-7915f1266d12\") " pod="calico-system/whisker-bfd4644d7-dftk2"
Apr 17 03:03:43.666664 kubelet[2725]: I0417 03:03:43.666234 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec24544f-dd99-4880-8240-7915f1266d12-whisker-ca-bundle\") pod \"whisker-bfd4644d7-dftk2\" (UID: \"ec24544f-dd99-4880-8240-7915f1266d12\") " pod="calico-system/whisker-bfd4644d7-dftk2"
Apr 17 03:03:43.666664 kubelet[2725]: I0417 03:03:43.666273 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e24d71c5-7063-40d8-9b55-b8ca8f5e8578-tigera-ca-bundle\") pod \"calico-kube-controllers-c57849b64-mlx2h\" (UID: \"e24d71c5-7063-40d8-9b55-b8ca8f5e8578\") " pod="calico-system/calico-kube-controllers-c57849b64-mlx2h"
Apr 17 03:03:43.666664 kubelet[2725]: I0417 03:03:43.666299 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fe8bbc7-3295-4ad2-bdf8-87ef564a3bb1-config-volume\") pod \"coredns-66bc5c9577-sdh6t\" (UID: \"1fe8bbc7-3295-4ad2-bdf8-87ef564a3bb1\") " pod="kube-system/coredns-66bc5c9577-sdh6t"
Apr 17 03:03:43.666664 kubelet[2725]: I0417 03:03:43.666314 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mxrr\" (UniqueName: \"kubernetes.io/projected/1fe8bbc7-3295-4ad2-bdf8-87ef564a3bb1-kube-api-access-6mxrr\") pod \"coredns-66bc5c9577-sdh6t\" (UID: \"1fe8bbc7-3295-4ad2-bdf8-87ef564a3bb1\") " pod="kube-system/coredns-66bc5c9577-sdh6t"
Apr 17 03:03:43.666664 kubelet[2725]: I0417 03:03:43.666330 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ec24544f-dd99-4880-8240-7915f1266d12-nginx-config\") pod \"whisker-bfd4644d7-dftk2\" (UID: \"ec24544f-dd99-4880-8240-7915f1266d12\") " pod="calico-system/whisker-bfd4644d7-dftk2"
Apr 17 03:03:43.666859 kubelet[2725]: I0417 03:03:43.666357 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlmdq\" (UniqueName: \"kubernetes.io/projected/ec24544f-dd99-4880-8240-7915f1266d12-kube-api-access-wlmdq\") pod \"whisker-bfd4644d7-dftk2\" (UID: \"ec24544f-dd99-4880-8240-7915f1266d12\") " pod="calico-system/whisker-bfd4644d7-dftk2"
Apr 17 03:03:43.666859 kubelet[2725]: I0417 03:03:43.666371 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvbls\" (UniqueName: \"kubernetes.io/projected/c2693bff-f36f-4e79-8b27-a87e39664a97-kube-api-access-nvbls\") pod \"coredns-66bc5c9577-pnqxl\" (UID: \"c2693bff-f36f-4e79-8b27-a87e39664a97\") " pod="kube-system/coredns-66bc5c9577-pnqxl"
Apr 17 03:03:43.666859 kubelet[2725]: I0417 03:03:43.666392 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bf2c43b-809d-4d53-8950-e87873e687fe-config\") pod \"goldmane-cccfbd5cf-5h9fm\" (UID: \"8bf2c43b-809d-4d53-8950-e87873e687fe\") " pod="calico-system/goldmane-cccfbd5cf-5h9fm"
Apr 17 03:03:43.848087 kubelet[2725]: E0417 03:03:43.847997 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 03:03:43.849352 containerd[1572]: time="2026-04-17T03:03:43.849310673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sdh6t,Uid:1fe8bbc7-3295-4ad2-bdf8-87ef564a3bb1,Namespace:kube-system,Attempt:0,}"
Apr 17 03:03:43.852156 containerd[1572]: time="2026-04-17T03:03:43.852087325Z" level=info msg="CreateContainer within sandbox \"6854be8fb6f7c6d2c3c38e4f8e08a93f6cbb0cb976991fd432f6cbe83c897781\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 17 03:03:43.859953 kubelet[2725]: E0417 03:03:43.859662 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 03:03:43.860180 containerd[1572]: time="2026-04-17T03:03:43.860157993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pnqxl,Uid:c2693bff-f36f-4e79-8b27-a87e39664a97,Namespace:kube-system,Attempt:0,}"
Apr 17 03:03:43.868351 containerd[1572]: time="2026-04-17T03:03:43.868288051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d76c7b76-gmmgs,Uid:b8e9572e-cd21-4783-81eb-03cb12ebcc87,Namespace:calico-system,Attempt:0,}"
Apr 17 03:03:43.868934 containerd[1572]: time="2026-04-17T03:03:43.868878469Z" level=info msg="Container 66cadbabcc8a64ed53c0a1f00b265511985286ff91d36183e8403f28bcadf52d: CDI devices from CRI Config.CDIDevices: []"
Apr 17 03:03:43.877091 containerd[1572]: time="2026-04-17T03:03:43.877060683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d76c7b76-rl429,Uid:a023fec7-9b08-45e6-b187-85e88df49048,Namespace:calico-system,Attempt:0,}"
Apr 17 03:03:43.880499 containerd[1572]: time="2026-04-17T03:03:43.880398508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bfd4644d7-dftk2,Uid:ec24544f-dd99-4880-8240-7915f1266d12,Namespace:calico-system,Attempt:0,}"
Apr 17 03:03:43.888824 containerd[1572]: time="2026-04-17T03:03:43.888574336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-5h9fm,Uid:8bf2c43b-809d-4d53-8950-e87873e687fe,Namespace:calico-system,Attempt:0,}"
Apr 17 03:03:43.891473 containerd[1572]: time="2026-04-17T03:03:43.891255225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c57849b64-mlx2h,Uid:e24d71c5-7063-40d8-9b55-b8ca8f5e8578,Namespace:calico-system,Attempt:0,}"
Apr 17 03:03:43.897821 containerd[1572]: time="2026-04-17T03:03:43.897775225Z" level=info msg="CreateContainer within sandbox \"6854be8fb6f7c6d2c3c38e4f8e08a93f6cbb0cb976991fd432f6cbe83c897781\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"66cadbabcc8a64ed53c0a1f00b265511985286ff91d36183e8403f28bcadf52d\""
Apr 17 03:03:43.900293 containerd[1572]: time="2026-04-17T03:03:43.900264751Z" level=info msg="StartContainer for \"66cadbabcc8a64ed53c0a1f00b265511985286ff91d36183e8403f28bcadf52d\""
Apr 17 03:03:43.903790 containerd[1572]: time="2026-04-17T03:03:43.903488276Z" level=info msg="connecting to shim 66cadbabcc8a64ed53c0a1f00b265511985286ff91d36183e8403f28bcadf52d" address="unix:///run/containerd/s/6202c4e15a5902107e7f9c950b2f7e6e40657a674fbded54b1f7efcab2265a48" protocol=ttrpc version=3
Apr 17 03:03:43.932904 systemd[1]: Started cri-containerd-66cadbabcc8a64ed53c0a1f00b265511985286ff91d36183e8403f28bcadf52d.scope - libcontainer container 66cadbabcc8a64ed53c0a1f00b265511985286ff91d36183e8403f28bcadf52d.
Apr 17 03:03:44.009134 containerd[1572]: time="2026-04-17T03:03:44.009078074Z" level=error msg="Failed to destroy network for sandbox \"7e232a06174a9c0dade76a582a9b2db8c18316328f238e8c6656c657a962b9ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 03:03:44.012130 systemd[1]: run-netns-cni\x2d9c772c64\x2da59c\x2d2c7a\x2d4e6a\x2ddce29d8b80a5.mount: Deactivated successfully.
Apr 17 03:03:44.014482 containerd[1572]: time="2026-04-17T03:03:44.014402187Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sdh6t,Uid:1fe8bbc7-3295-4ad2-bdf8-87ef564a3bb1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e232a06174a9c0dade76a582a9b2db8c18316328f238e8c6656c657a962b9ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 03:03:44.022783 kubelet[2725]: E0417 03:03:44.022720 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e232a06174a9c0dade76a582a9b2db8c18316328f238e8c6656c657a962b9ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 03:03:44.022889 kubelet[2725]: E0417 03:03:44.022812 2725 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e232a06174a9c0dade76a582a9b2db8c18316328f238e8c6656c657a962b9ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-sdh6t"
Apr 17 03:03:44.022889 kubelet[2725]: E0417 03:03:44.022832 2725 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e232a06174a9c0dade76a582a9b2db8c18316328f238e8c6656c657a962b9ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-sdh6t"
Apr 17 03:03:44.022959 kubelet[2725]: E0417 03:03:44.022881 2725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-sdh6t_kube-system(1fe8bbc7-3295-4ad2-bdf8-87ef564a3bb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-sdh6t_kube-system(1fe8bbc7-3295-4ad2-bdf8-87ef564a3bb1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e232a06174a9c0dade76a582a9b2db8c18316328f238e8c6656c657a962b9ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-sdh6t" podUID="1fe8bbc7-3295-4ad2-bdf8-87ef564a3bb1"
Apr 17 03:03:44.026968 containerd[1572]: time="2026-04-17T03:03:44.026867218Z" level=error msg="Failed to destroy network for sandbox \"a64a84b9f60da2cf92e630a8c329b0b561964c10dc2c0a6e76be6030c084e7c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 03:03:44.030079 systemd[1]: run-netns-cni\x2de0283498\x2dfcbb\x2d899c\x2deb68\x2d9185235baeab.mount: Deactivated successfully.
Apr 17 03:03:44.031558 containerd[1572]: time="2026-04-17T03:03:44.031528283Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-5h9fm,Uid:8bf2c43b-809d-4d53-8950-e87873e687fe,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a64a84b9f60da2cf92e630a8c329b0b561964c10dc2c0a6e76be6030c084e7c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 03:03:44.031952 kubelet[2725]: E0417 03:03:44.031896 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a64a84b9f60da2cf92e630a8c329b0b561964c10dc2c0a6e76be6030c084e7c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 03:03:44.032096 kubelet[2725]: E0417 03:03:44.032035 2725 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a64a84b9f60da2cf92e630a8c329b0b561964c10dc2c0a6e76be6030c084e7c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-5h9fm"
Apr 17 03:03:44.032096 kubelet[2725]: E0417 03:03:44.032054 2725 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a64a84b9f60da2cf92e630a8c329b0b561964c10dc2c0a6e76be6030c084e7c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-5h9fm"
Apr 17 03:03:44.032427 kubelet[2725]: E0417 03:03:44.032269 2725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-5h9fm_calico-system(8bf2c43b-809d-4d53-8950-e87873e687fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-5h9fm_calico-system(8bf2c43b-809d-4d53-8950-e87873e687fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a64a84b9f60da2cf92e630a8c329b0b561964c10dc2c0a6e76be6030c084e7c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-5h9fm" podUID="8bf2c43b-809d-4d53-8950-e87873e687fe"
Apr 17 03:03:44.036190 containerd[1572]: time="2026-04-17T03:03:44.036117980Z" level=info msg="StartContainer for \"66cadbabcc8a64ed53c0a1f00b265511985286ff91d36183e8403f28bcadf52d\" returns successfully"
Apr 17 03:03:44.045585 containerd[1572]: time="2026-04-17T03:03:44.045540538Z" level=error msg="Failed to destroy network for sandbox \"951559656003d5f44c5ccbe9c3a99b5fbaec75245df7ad9a6c5c3a90c12f31a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 03:03:44.047715 systemd[1]: run-netns-cni\x2dc563db8e\x2d1b5c\x2d4864\x2d56b9\x2dbad6209d76fb.mount: Deactivated successfully.
Apr 17 03:03:44.057798 containerd[1572]: time="2026-04-17T03:03:44.048941375Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c57849b64-mlx2h,Uid:e24d71c5-7063-40d8-9b55-b8ca8f5e8578,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"951559656003d5f44c5ccbe9c3a99b5fbaec75245df7ad9a6c5c3a90c12f31a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 03:03:44.058149 containerd[1572]: time="2026-04-17T03:03:44.050268882Z" level=error msg="Failed to destroy network for sandbox \"972fedc9c49f761bc6ff39764931c9e10a9481d4caa8e5f77bb83cc3234dffed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 03:03:44.058535 containerd[1572]: time="2026-04-17T03:03:44.057700010Z" level=error msg="Failed to destroy network for sandbox \"dbc065b401f82ff5766906978daf46eeb7d622e00070df52f354a0aa869d1ad6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 03:03:44.058638 kubelet[2725]: E0417 03:03:44.058580 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"951559656003d5f44c5ccbe9c3a99b5fbaec75245df7ad9a6c5c3a90c12f31a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 03:03:44.058701 kubelet[2725]: E0417 03:03:44.058650 2725 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"951559656003d5f44c5ccbe9c3a99b5fbaec75245df7ad9a6c5c3a90c12f31a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c57849b64-mlx2h"
Apr 17 03:03:44.058701 kubelet[2725]: E0417 03:03:44.058668 2725 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"951559656003d5f44c5ccbe9c3a99b5fbaec75245df7ad9a6c5c3a90c12f31a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c57849b64-mlx2h"
Apr 17 03:03:44.058784 kubelet[2725]: E0417 03:03:44.058721 2725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c57849b64-mlx2h_calico-system(e24d71c5-7063-40d8-9b55-b8ca8f5e8578)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c57849b64-mlx2h_calico-system(e24d71c5-7063-40d8-9b55-b8ca8f5e8578)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"951559656003d5f44c5ccbe9c3a99b5fbaec75245df7ad9a6c5c3a90c12f31a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c57849b64-mlx2h" podUID="e24d71c5-7063-40d8-9b55-b8ca8f5e8578"
Apr 17 03:03:44.060549 systemd[1]: run-netns-cni\x2d8f6285b2\x2d5159\x2d03f9\x2dc3fd\x2d1b02b847c8f2.mount: Deactivated successfully.
Apr 17 03:03:44.060684 systemd[1]: run-netns-cni\x2ddded7563\x2df469\x2d69a5\x2d3a69\x2d73c5ced5466f.mount: Deactivated successfully.
Apr 17 03:03:44.061194 containerd[1572]: time="2026-04-17T03:03:44.061130196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d76c7b76-rl429,Uid:a023fec7-9b08-45e6-b187-85e88df49048,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"972fedc9c49f761bc6ff39764931c9e10a9481d4caa8e5f77bb83cc3234dffed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 03:03:44.061765 kubelet[2725]: E0417 03:03:44.061693 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"972fedc9c49f761bc6ff39764931c9e10a9481d4caa8e5f77bb83cc3234dffed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 03:03:44.061888 kubelet[2725]: E0417 03:03:44.061811 2725 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"972fedc9c49f761bc6ff39764931c9e10a9481d4caa8e5f77bb83cc3234dffed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-57d76c7b76-rl429" Apr 17 03:03:44.061888 kubelet[2725]: E0417 03:03:44.061842 2725 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"972fedc9c49f761bc6ff39764931c9e10a9481d4caa8e5f77bb83cc3234dffed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-57d76c7b76-rl429" Apr 17 03:03:44.062365 containerd[1572]: time="2026-04-17T03:03:44.062320763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pnqxl,Uid:c2693bff-f36f-4e79-8b27-a87e39664a97,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbc065b401f82ff5766906978daf46eeb7d622e00070df52f354a0aa869d1ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 03:03:44.062829 kubelet[2725]: E0417 03:03:44.062286 2725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57d76c7b76-rl429_calico-system(a023fec7-9b08-45e6-b187-85e88df49048)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57d76c7b76-rl429_calico-system(a023fec7-9b08-45e6-b187-85e88df49048)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"972fedc9c49f761bc6ff39764931c9e10a9481d4caa8e5f77bb83cc3234dffed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-57d76c7b76-rl429" podUID="a023fec7-9b08-45e6-b187-85e88df49048" Apr 17 03:03:44.062829 kubelet[2725]: E0417 03:03:44.062520 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbc065b401f82ff5766906978daf46eeb7d622e00070df52f354a0aa869d1ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 03:03:44.062829 kubelet[2725]: E0417 03:03:44.062556 2725 kuberuntime_sandbox.go:71] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbc065b401f82ff5766906978daf46eeb7d622e00070df52f354a0aa869d1ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pnqxl" Apr 17 03:03:44.062996 kubelet[2725]: E0417 03:03:44.062619 2725 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbc065b401f82ff5766906978daf46eeb7d622e00070df52f354a0aa869d1ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pnqxl" Apr 17 03:03:44.062996 kubelet[2725]: E0417 03:03:44.062934 2725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-pnqxl_kube-system(c2693bff-f36f-4e79-8b27-a87e39664a97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-pnqxl_kube-system(c2693bff-f36f-4e79-8b27-a87e39664a97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbc065b401f82ff5766906978daf46eeb7d622e00070df52f354a0aa869d1ad6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-pnqxl" podUID="c2693bff-f36f-4e79-8b27-a87e39664a97" Apr 17 03:03:44.074899 containerd[1572]: time="2026-04-17T03:03:44.074802997Z" level=error msg="Failed to destroy network for sandbox \"f9883890b5669f326534f56e017038c15c1abf0fa0a958588f5f655190e3b8e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 03:03:44.076623 containerd[1572]: time="2026-04-17T03:03:44.076551982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bfd4644d7-dftk2,Uid:ec24544f-dd99-4880-8240-7915f1266d12,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9883890b5669f326534f56e017038c15c1abf0fa0a958588f5f655190e3b8e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 03:03:44.077053 kubelet[2725]: E0417 03:03:44.076853 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9883890b5669f326534f56e017038c15c1abf0fa0a958588f5f655190e3b8e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 03:03:44.077053 kubelet[2725]: E0417 03:03:44.076898 2725 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9883890b5669f326534f56e017038c15c1abf0fa0a958588f5f655190e3b8e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bfd4644d7-dftk2" Apr 17 03:03:44.077053 kubelet[2725]: E0417 03:03:44.076950 2725 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9883890b5669f326534f56e017038c15c1abf0fa0a958588f5f655190e3b8e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/whisker-bfd4644d7-dftk2" Apr 17 03:03:44.077288 kubelet[2725]: E0417 03:03:44.077008 2725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-bfd4644d7-dftk2_calico-system(ec24544f-dd99-4880-8240-7915f1266d12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-bfd4644d7-dftk2_calico-system(ec24544f-dd99-4880-8240-7915f1266d12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9883890b5669f326534f56e017038c15c1abf0fa0a958588f5f655190e3b8e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bfd4644d7-dftk2" podUID="ec24544f-dd99-4880-8240-7915f1266d12" Apr 17 03:03:44.083019 containerd[1572]: time="2026-04-17T03:03:44.082982817Z" level=error msg="Failed to destroy network for sandbox \"d931533068585125ba930331a0b03df9db0db2694031a24c262ab0c0dabc8c31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 03:03:44.084190 containerd[1572]: time="2026-04-17T03:03:44.084143619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d76c7b76-gmmgs,Uid:b8e9572e-cd21-4783-81eb-03cb12ebcc87,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d931533068585125ba930331a0b03df9db0db2694031a24c262ab0c0dabc8c31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 03:03:44.084455 kubelet[2725]: E0417 03:03:44.084420 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"d931533068585125ba930331a0b03df9db0db2694031a24c262ab0c0dabc8c31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 03:03:44.084545 kubelet[2725]: E0417 03:03:44.084473 2725 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d931533068585125ba930331a0b03df9db0db2694031a24c262ab0c0dabc8c31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-57d76c7b76-gmmgs" Apr 17 03:03:44.084545 kubelet[2725]: E0417 03:03:44.084496 2725 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d931533068585125ba930331a0b03df9db0db2694031a24c262ab0c0dabc8c31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-57d76c7b76-gmmgs" Apr 17 03:03:44.084588 kubelet[2725]: E0417 03:03:44.084553 2725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57d76c7b76-gmmgs_calico-system(b8e9572e-cd21-4783-81eb-03cb12ebcc87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57d76c7b76-gmmgs_calico-system(b8e9572e-cd21-4783-81eb-03cb12ebcc87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d931533068585125ba930331a0b03df9db0db2694031a24c262ab0c0dabc8c31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-57d76c7b76-gmmgs" podUID="b8e9572e-cd21-4783-81eb-03cb12ebcc87" Apr 17 03:03:44.752982 systemd[1]: Created slice kubepods-besteffort-pod484b28cf_729f_4fde_afb9_2d7a3393fea4.slice - libcontainer container kubepods-besteffort-pod484b28cf_729f_4fde_afb9_2d7a3393fea4.slice. Apr 17 03:03:44.757732 containerd[1572]: time="2026-04-17T03:03:44.757689667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5pbx6,Uid:484b28cf-729f-4fde-afb9-2d7a3393fea4,Namespace:calico-system,Attempt:0,}" Apr 17 03:03:44.861873 kubelet[2725]: I0417 03:03:44.861167 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bq56h" podStartSLOduration=3.314468679 podStartE2EDuration="16.861153924s" podCreationTimestamp="2026-04-17 03:03:28 +0000 UTC" firstStartedPulling="2026-04-17 03:03:29.389478463 +0000 UTC m=+16.711496396" lastFinishedPulling="2026-04-17 03:03:42.936163709 +0000 UTC m=+30.258181641" observedRunningTime="2026-04-17 03:03:44.860324342 +0000 UTC m=+32.182342288" watchObservedRunningTime="2026-04-17 03:03:44.861153924 +0000 UTC m=+32.183171866" Apr 17 03:03:44.873324 systemd-networkd[1483]: cali94f8d6c18fd: Link UP Apr 17 03:03:44.874062 systemd-networkd[1483]: cali94f8d6c18fd: Gained carrier Apr 17 03:03:44.886999 containerd[1572]: 2026-04-17 03:03:44.779 [ERROR][3807] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 03:03:44.886999 containerd[1572]: 2026-04-17 03:03:44.798 [INFO][3807] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5pbx6-eth0 csi-node-driver- calico-system 484b28cf-729f-4fde-afb9-2d7a3393fea4 695 0 2026-04-17 03:03:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver 
controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5pbx6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali94f8d6c18fd [] [] }} ContainerID="c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053" Namespace="calico-system" Pod="csi-node-driver-5pbx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--5pbx6-" Apr 17 03:03:44.886999 containerd[1572]: 2026-04-17 03:03:44.798 [INFO][3807] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053" Namespace="calico-system" Pod="csi-node-driver-5pbx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--5pbx6-eth0" Apr 17 03:03:44.886999 containerd[1572]: 2026-04-17 03:03:44.826 [INFO][3823] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053" HandleID="k8s-pod-network.c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053" Workload="localhost-k8s-csi--node--driver--5pbx6-eth0" Apr 17 03:03:44.887464 containerd[1572]: 2026-04-17 03:03:44.832 [INFO][3823] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053" HandleID="k8s-pod-network.c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053" Workload="localhost-k8s-csi--node--driver--5pbx6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b1d00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5pbx6", "timestamp":"2026-04-17 03:03:44.826048092 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0006deb00)} Apr 17 03:03:44.887464 containerd[1572]: 2026-04-17 03:03:44.832 [INFO][3823] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 03:03:44.887464 containerd[1572]: 2026-04-17 03:03:44.832 [INFO][3823] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 03:03:44.887464 containerd[1572]: 2026-04-17 03:03:44.832 [INFO][3823] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 03:03:44.887464 containerd[1572]: 2026-04-17 03:03:44.835 [INFO][3823] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053" host="localhost" Apr 17 03:03:44.887464 containerd[1572]: 2026-04-17 03:03:44.839 [INFO][3823] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 03:03:44.887464 containerd[1572]: 2026-04-17 03:03:44.843 [INFO][3823] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 03:03:44.887464 containerd[1572]: 2026-04-17 03:03:44.845 [INFO][3823] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 03:03:44.887464 containerd[1572]: 2026-04-17 03:03:44.847 [INFO][3823] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 03:03:44.887464 containerd[1572]: 2026-04-17 03:03:44.847 [INFO][3823] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053" host="localhost" Apr 17 03:03:44.887646 containerd[1572]: 2026-04-17 03:03:44.848 [INFO][3823] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053 Apr 17 03:03:44.887646 containerd[1572]: 
2026-04-17 03:03:44.854 [INFO][3823] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053" host="localhost" Apr 17 03:03:44.887646 containerd[1572]: 2026-04-17 03:03:44.859 [INFO][3823] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053" host="localhost" Apr 17 03:03:44.887646 containerd[1572]: 2026-04-17 03:03:44.859 [INFO][3823] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053" host="localhost" Apr 17 03:03:44.887646 containerd[1572]: 2026-04-17 03:03:44.859 [INFO][3823] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 03:03:44.887646 containerd[1572]: 2026-04-17 03:03:44.859 [INFO][3823] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053" HandleID="k8s-pod-network.c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053" Workload="localhost-k8s-csi--node--driver--5pbx6-eth0" Apr 17 03:03:44.887733 containerd[1572]: 2026-04-17 03:03:44.865 [INFO][3807] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053" Namespace="calico-system" Pod="csi-node-driver-5pbx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--5pbx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5pbx6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"484b28cf-729f-4fde-afb9-2d7a3393fea4", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 3, 3, 
29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5pbx6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali94f8d6c18fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 03:03:44.887795 containerd[1572]: 2026-04-17 03:03:44.865 [INFO][3807] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053" Namespace="calico-system" Pod="csi-node-driver-5pbx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--5pbx6-eth0" Apr 17 03:03:44.887795 containerd[1572]: 2026-04-17 03:03:44.865 [INFO][3807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94f8d6c18fd ContainerID="c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053" Namespace="calico-system" Pod="csi-node-driver-5pbx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--5pbx6-eth0" Apr 17 03:03:44.887795 containerd[1572]: 2026-04-17 03:03:44.875 [INFO][3807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053" 
Namespace="calico-system" Pod="csi-node-driver-5pbx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--5pbx6-eth0" Apr 17 03:03:44.887838 containerd[1572]: 2026-04-17 03:03:44.875 [INFO][3807] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053" Namespace="calico-system" Pod="csi-node-driver-5pbx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--5pbx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5pbx6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"484b28cf-729f-4fde-afb9-2d7a3393fea4", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 3, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053", Pod:"csi-node-driver-5pbx6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali94f8d6c18fd", MAC:"0e:da:ae:fb:aa:2a", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 03:03:44.887884 containerd[1572]: 2026-04-17 03:03:44.882 [INFO][3807] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053" Namespace="calico-system" Pod="csi-node-driver-5pbx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--5pbx6-eth0" Apr 17 03:03:44.905134 containerd[1572]: time="2026-04-17T03:03:44.905065740Z" level=info msg="connecting to shim c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053" address="unix:///run/containerd/s/d7359b98fff85c9618884176ed43bcfcef6deab1c464b0873b98e79d20082da0" namespace=k8s.io protocol=ttrpc version=3 Apr 17 03:03:44.926152 systemd[1]: Started cri-containerd-c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053.scope - libcontainer container c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053. Apr 17 03:03:44.934753 systemd-resolved[1486]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 03:03:44.949595 systemd[1]: run-netns-cni\x2dcb1485a5\x2d7704\x2d903d\x2dc0d8\x2d9f86a1b88b7a.mount: Deactivated successfully. Apr 17 03:03:44.949664 systemd[1]: run-netns-cni\x2d6f8e1453\x2d0134\x2de94e\x2d2272\x2d6ee8fc6ed5f9.mount: Deactivated successfully. 
Apr 17 03:03:44.951168 containerd[1572]: time="2026-04-17T03:03:44.951107835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5pbx6,Uid:484b28cf-729f-4fde-afb9-2d7a3393fea4,Namespace:calico-system,Attempt:0,} returns sandbox id \"c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053\"" Apr 17 03:03:44.952851 containerd[1572]: time="2026-04-17T03:03:44.952820104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 03:03:44.986094 kubelet[2725]: I0417 03:03:44.986035 2725 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ec24544f-dd99-4880-8240-7915f1266d12-whisker-backend-key-pair\") pod \"ec24544f-dd99-4880-8240-7915f1266d12\" (UID: \"ec24544f-dd99-4880-8240-7915f1266d12\") " Apr 17 03:03:44.986094 kubelet[2725]: I0417 03:03:44.986084 2725 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ec24544f-dd99-4880-8240-7915f1266d12-nginx-config\") pod \"ec24544f-dd99-4880-8240-7915f1266d12\" (UID: \"ec24544f-dd99-4880-8240-7915f1266d12\") " Apr 17 03:03:44.986263 kubelet[2725]: I0417 03:03:44.986150 2725 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec24544f-dd99-4880-8240-7915f1266d12-whisker-ca-bundle\") pod \"ec24544f-dd99-4880-8240-7915f1266d12\" (UID: \"ec24544f-dd99-4880-8240-7915f1266d12\") " Apr 17 03:03:44.986263 kubelet[2725]: I0417 03:03:44.986173 2725 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlmdq\" (UniqueName: \"kubernetes.io/projected/ec24544f-dd99-4880-8240-7915f1266d12-kube-api-access-wlmdq\") pod \"ec24544f-dd99-4880-8240-7915f1266d12\" (UID: \"ec24544f-dd99-4880-8240-7915f1266d12\") " Apr 17 03:03:44.986760 kubelet[2725]: I0417 03:03:44.986737 2725 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec24544f-dd99-4880-8240-7915f1266d12-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ec24544f-dd99-4880-8240-7915f1266d12" (UID: "ec24544f-dd99-4880-8240-7915f1266d12"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 03:03:44.987358 kubelet[2725]: I0417 03:03:44.987305 2725 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec24544f-dd99-4880-8240-7915f1266d12-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "ec24544f-dd99-4880-8240-7915f1266d12" (UID: "ec24544f-dd99-4880-8240-7915f1266d12"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 03:03:44.989867 kubelet[2725]: I0417 03:03:44.989838 2725 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec24544f-dd99-4880-8240-7915f1266d12-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ec24544f-dd99-4880-8240-7915f1266d12" (UID: "ec24544f-dd99-4880-8240-7915f1266d12"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 03:03:44.990449 kubelet[2725]: I0417 03:03:44.990415 2725 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec24544f-dd99-4880-8240-7915f1266d12-kube-api-access-wlmdq" (OuterVolumeSpecName: "kube-api-access-wlmdq") pod "ec24544f-dd99-4880-8240-7915f1266d12" (UID: "ec24544f-dd99-4880-8240-7915f1266d12"). InnerVolumeSpecName "kube-api-access-wlmdq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 03:03:44.990833 systemd[1]: var-lib-kubelet-pods-ec24544f\x2ddd99\x2d4880\x2d8240\x2d7915f1266d12-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwlmdq.mount: Deactivated successfully. 
Apr 17 03:03:44.990957 systemd[1]: var-lib-kubelet-pods-ec24544f\x2ddd99\x2d4880\x2d8240\x2d7915f1266d12-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 17 03:03:45.087566 kubelet[2725]: I0417 03:03:45.087371 2725 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec24544f-dd99-4880-8240-7915f1266d12-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 17 03:03:45.087566 kubelet[2725]: I0417 03:03:45.087424 2725 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wlmdq\" (UniqueName: \"kubernetes.io/projected/ec24544f-dd99-4880-8240-7915f1266d12-kube-api-access-wlmdq\") on node \"localhost\" DevicePath \"\"" Apr 17 03:03:45.087566 kubelet[2725]: I0417 03:03:45.087434 2725 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ec24544f-dd99-4880-8240-7915f1266d12-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 17 03:03:45.087566 kubelet[2725]: I0417 03:03:45.087440 2725 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ec24544f-dd99-4880-8240-7915f1266d12-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 17 03:03:45.850168 kubelet[2725]: I0417 03:03:45.850103 2725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 03:03:45.854776 systemd[1]: Removed slice kubepods-besteffort-podec24544f_dd99_4880_8240_7915f1266d12.slice - libcontainer container kubepods-besteffort-podec24544f_dd99_4880_8240_7915f1266d12.slice. Apr 17 03:03:45.906157 systemd[1]: Created slice kubepods-besteffort-pod3c246281_0419_4338_98f1_8d337ba7c28d.slice - libcontainer container kubepods-besteffort-pod3c246281_0419_4338_98f1_8d337ba7c28d.slice. 
Apr 17 03:03:45.994834 kubelet[2725]: I0417 03:03:45.994724 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crqg6\" (UniqueName: \"kubernetes.io/projected/3c246281-0419-4338-98f1-8d337ba7c28d-kube-api-access-crqg6\") pod \"whisker-c5d8d6dd-srnj5\" (UID: \"3c246281-0419-4338-98f1-8d337ba7c28d\") " pod="calico-system/whisker-c5d8d6dd-srnj5" Apr 17 03:03:45.994834 kubelet[2725]: I0417 03:03:45.994802 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3c246281-0419-4338-98f1-8d337ba7c28d-whisker-backend-key-pair\") pod \"whisker-c5d8d6dd-srnj5\" (UID: \"3c246281-0419-4338-98f1-8d337ba7c28d\") " pod="calico-system/whisker-c5d8d6dd-srnj5" Apr 17 03:03:45.995697 kubelet[2725]: I0417 03:03:45.994992 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3c246281-0419-4338-98f1-8d337ba7c28d-nginx-config\") pod \"whisker-c5d8d6dd-srnj5\" (UID: \"3c246281-0419-4338-98f1-8d337ba7c28d\") " pod="calico-system/whisker-c5d8d6dd-srnj5" Apr 17 03:03:45.995697 kubelet[2725]: I0417 03:03:45.995235 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c246281-0419-4338-98f1-8d337ba7c28d-whisker-ca-bundle\") pod \"whisker-c5d8d6dd-srnj5\" (UID: \"3c246281-0419-4338-98f1-8d337ba7c28d\") " pod="calico-system/whisker-c5d8d6dd-srnj5" Apr 17 03:03:46.212159 containerd[1572]: time="2026-04-17T03:03:46.212108874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c5d8d6dd-srnj5,Uid:3c246281-0419-4338-98f1-8d337ba7c28d,Namespace:calico-system,Attempt:0,}" Apr 17 03:03:46.283779 systemd-networkd[1483]: cali94f8d6c18fd: Gained IPv6LL Apr 17 03:03:46.334351 systemd-networkd[1483]: 
cali26205a612fe: Link UP Apr 17 03:03:46.334654 systemd-networkd[1483]: cali26205a612fe: Gained carrier Apr 17 03:03:46.346694 containerd[1572]: 2026-04-17 03:03:46.239 [ERROR][3996] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 03:03:46.346694 containerd[1572]: 2026-04-17 03:03:46.258 [INFO][3996] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--c5d8d6dd--srnj5-eth0 whisker-c5d8d6dd- calico-system 3c246281-0419-4338-98f1-8d337ba7c28d 891 0 2026-04-17 03:03:45 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:c5d8d6dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-c5d8d6dd-srnj5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali26205a612fe [] [] }} ContainerID="28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393" Namespace="calico-system" Pod="whisker-c5d8d6dd-srnj5" WorkloadEndpoint="localhost-k8s-whisker--c5d8d6dd--srnj5-" Apr 17 03:03:46.346694 containerd[1572]: 2026-04-17 03:03:46.258 [INFO][3996] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393" Namespace="calico-system" Pod="whisker-c5d8d6dd-srnj5" WorkloadEndpoint="localhost-k8s-whisker--c5d8d6dd--srnj5-eth0" Apr 17 03:03:46.346694 containerd[1572]: 2026-04-17 03:03:46.286 [INFO][4010] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393" HandleID="k8s-pod-network.28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393" Workload="localhost-k8s-whisker--c5d8d6dd--srnj5-eth0" Apr 17 03:03:46.346983 containerd[1572]: 2026-04-17 
03:03:46.294 [INFO][4010] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393" HandleID="k8s-pod-network.28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393" Workload="localhost-k8s-whisker--c5d8d6dd--srnj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee7e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-c5d8d6dd-srnj5", "timestamp":"2026-04-17 03:03:46.286888647 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00063a420)} Apr 17 03:03:46.346983 containerd[1572]: 2026-04-17 03:03:46.294 [INFO][4010] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 03:03:46.346983 containerd[1572]: 2026-04-17 03:03:46.294 [INFO][4010] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 03:03:46.346983 containerd[1572]: 2026-04-17 03:03:46.294 [INFO][4010] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 03:03:46.346983 containerd[1572]: 2026-04-17 03:03:46.296 [INFO][4010] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393" host="localhost" Apr 17 03:03:46.346983 containerd[1572]: 2026-04-17 03:03:46.300 [INFO][4010] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 03:03:46.346983 containerd[1572]: 2026-04-17 03:03:46.306 [INFO][4010] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 03:03:46.346983 containerd[1572]: 2026-04-17 03:03:46.308 [INFO][4010] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 03:03:46.346983 containerd[1572]: 2026-04-17 03:03:46.311 [INFO][4010] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 03:03:46.346983 containerd[1572]: 2026-04-17 03:03:46.311 [INFO][4010] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393" host="localhost" Apr 17 03:03:46.347162 containerd[1572]: 2026-04-17 03:03:46.313 [INFO][4010] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393 Apr 17 03:03:46.347162 containerd[1572]: 2026-04-17 03:03:46.321 [INFO][4010] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393" host="localhost" Apr 17 03:03:46.347162 containerd[1572]: 2026-04-17 03:03:46.330 [INFO][4010] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393" host="localhost" Apr 17 03:03:46.347162 containerd[1572]: 2026-04-17 03:03:46.330 [INFO][4010] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393" host="localhost" Apr 17 03:03:46.347162 containerd[1572]: 2026-04-17 03:03:46.330 [INFO][4010] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 03:03:46.347162 containerd[1572]: 2026-04-17 03:03:46.330 [INFO][4010] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393" HandleID="k8s-pod-network.28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393" Workload="localhost-k8s-whisker--c5d8d6dd--srnj5-eth0" Apr 17 03:03:46.347251 containerd[1572]: 2026-04-17 03:03:46.332 [INFO][3996] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393" Namespace="calico-system" Pod="whisker-c5d8d6dd-srnj5" WorkloadEndpoint="localhost-k8s-whisker--c5d8d6dd--srnj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--c5d8d6dd--srnj5-eth0", GenerateName:"whisker-c5d8d6dd-", Namespace:"calico-system", SelfLink:"", UID:"3c246281-0419-4338-98f1-8d337ba7c28d", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 3, 3, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c5d8d6dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-c5d8d6dd-srnj5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali26205a612fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 03:03:46.347251 containerd[1572]: 2026-04-17 03:03:46.332 [INFO][3996] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393" Namespace="calico-system" Pod="whisker-c5d8d6dd-srnj5" WorkloadEndpoint="localhost-k8s-whisker--c5d8d6dd--srnj5-eth0" Apr 17 03:03:46.347319 containerd[1572]: 2026-04-17 03:03:46.332 [INFO][3996] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26205a612fe ContainerID="28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393" Namespace="calico-system" Pod="whisker-c5d8d6dd-srnj5" WorkloadEndpoint="localhost-k8s-whisker--c5d8d6dd--srnj5-eth0" Apr 17 03:03:46.347319 containerd[1572]: 2026-04-17 03:03:46.334 [INFO][3996] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393" Namespace="calico-system" Pod="whisker-c5d8d6dd-srnj5" WorkloadEndpoint="localhost-k8s-whisker--c5d8d6dd--srnj5-eth0" Apr 17 03:03:46.347350 containerd[1572]: 2026-04-17 03:03:46.336 [INFO][3996] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393" Namespace="calico-system" Pod="whisker-c5d8d6dd-srnj5" 
WorkloadEndpoint="localhost-k8s-whisker--c5d8d6dd--srnj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--c5d8d6dd--srnj5-eth0", GenerateName:"whisker-c5d8d6dd-", Namespace:"calico-system", SelfLink:"", UID:"3c246281-0419-4338-98f1-8d337ba7c28d", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 3, 3, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c5d8d6dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393", Pod:"whisker-c5d8d6dd-srnj5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali26205a612fe", MAC:"ea:bf:97:78:86:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 03:03:46.347398 containerd[1572]: 2026-04-17 03:03:46.344 [INFO][3996] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393" Namespace="calico-system" Pod="whisker-c5d8d6dd-srnj5" WorkloadEndpoint="localhost-k8s-whisker--c5d8d6dd--srnj5-eth0" Apr 17 03:03:46.381578 containerd[1572]: time="2026-04-17T03:03:46.381485314Z" level=info msg="connecting to shim 
28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393" address="unix:///run/containerd/s/0ec218e694739da0520ca341704f3a02391f37d8e25ceb4b863d0946c32a4b9f" namespace=k8s.io protocol=ttrpc version=3 Apr 17 03:03:46.406273 systemd[1]: Started cri-containerd-28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393.scope - libcontainer container 28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393. Apr 17 03:03:46.421259 systemd-resolved[1486]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 03:03:46.450504 containerd[1572]: time="2026-04-17T03:03:46.449114732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:46.450504 containerd[1572]: time="2026-04-17T03:03:46.449621337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 17 03:03:46.451056 containerd[1572]: time="2026-04-17T03:03:46.451025427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c5d8d6dd-srnj5,Uid:3c246281-0419-4338-98f1-8d337ba7c28d,Namespace:calico-system,Attempt:0,} returns sandbox id \"28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393\"" Apr 17 03:03:46.451795 containerd[1572]: time="2026-04-17T03:03:46.451752396Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:46.453621 containerd[1572]: time="2026-04-17T03:03:46.453596492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:46.454391 containerd[1572]: time="2026-04-17T03:03:46.454355522Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image 
id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.50151047s" Apr 17 03:03:46.454432 containerd[1572]: time="2026-04-17T03:03:46.454399235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 17 03:03:46.456286 containerd[1572]: time="2026-04-17T03:03:46.456249897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 03:03:46.459740 containerd[1572]: time="2026-04-17T03:03:46.459706630Z" level=info msg="CreateContainer within sandbox \"c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 03:03:46.472749 containerd[1572]: time="2026-04-17T03:03:46.472652741Z" level=info msg="Container 6a7d834cd5e9ee0ac9100cd6373e53ae1e97692fc0c833e0b7a40230e0391e31: CDI devices from CRI Config.CDIDevices: []" Apr 17 03:03:46.480386 containerd[1572]: time="2026-04-17T03:03:46.480334458Z" level=info msg="CreateContainer within sandbox \"c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6a7d834cd5e9ee0ac9100cd6373e53ae1e97692fc0c833e0b7a40230e0391e31\"" Apr 17 03:03:46.480888 containerd[1572]: time="2026-04-17T03:03:46.480873379Z" level=info msg="StartContainer for \"6a7d834cd5e9ee0ac9100cd6373e53ae1e97692fc0c833e0b7a40230e0391e31\"" Apr 17 03:03:46.481979 containerd[1572]: time="2026-04-17T03:03:46.481955543Z" level=info msg="connecting to shim 6a7d834cd5e9ee0ac9100cd6373e53ae1e97692fc0c833e0b7a40230e0391e31" address="unix:///run/containerd/s/d7359b98fff85c9618884176ed43bcfcef6deab1c464b0873b98e79d20082da0" protocol=ttrpc version=3 Apr 17 03:03:46.501204 
systemd[1]: Started cri-containerd-6a7d834cd5e9ee0ac9100cd6373e53ae1e97692fc0c833e0b7a40230e0391e31.scope - libcontainer container 6a7d834cd5e9ee0ac9100cd6373e53ae1e97692fc0c833e0b7a40230e0391e31. Apr 17 03:03:46.577546 containerd[1572]: time="2026-04-17T03:03:46.577425248Z" level=info msg="StartContainer for \"6a7d834cd5e9ee0ac9100cd6373e53ae1e97692fc0c833e0b7a40230e0391e31\" returns successfully" Apr 17 03:03:46.751596 kubelet[2725]: I0417 03:03:46.751355 2725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec24544f-dd99-4880-8240-7915f1266d12" path="/var/lib/kubelet/pods/ec24544f-dd99-4880-8240-7915f1266d12/volumes" Apr 17 03:03:47.499126 systemd-networkd[1483]: cali26205a612fe: Gained IPv6LL Apr 17 03:03:47.815212 containerd[1572]: time="2026-04-17T03:03:47.815076738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:47.815855 containerd[1572]: time="2026-04-17T03:03:47.815812077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 17 03:03:47.816853 containerd[1572]: time="2026-04-17T03:03:47.816808352Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:47.819029 containerd[1572]: time="2026-04-17T03:03:47.818963330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:47.819518 containerd[1572]: time="2026-04-17T03:03:47.819490095Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.363208766s" Apr 17 03:03:47.819550 containerd[1572]: time="2026-04-17T03:03:47.819522376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 17 03:03:47.820397 containerd[1572]: time="2026-04-17T03:03:47.820380407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 03:03:47.823999 containerd[1572]: time="2026-04-17T03:03:47.823896944Z" level=info msg="CreateContainer within sandbox \"28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 03:03:47.832290 containerd[1572]: time="2026-04-17T03:03:47.832253352Z" level=info msg="Container 650ed33d4ad6316b0d074f068ce4738e6d2c517331467a0d7ae7ab865e5a8de9: CDI devices from CRI Config.CDIDevices: []" Apr 17 03:03:47.839457 containerd[1572]: time="2026-04-17T03:03:47.839359698Z" level=info msg="CreateContainer within sandbox \"28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"650ed33d4ad6316b0d074f068ce4738e6d2c517331467a0d7ae7ab865e5a8de9\"" Apr 17 03:03:47.840010 containerd[1572]: time="2026-04-17T03:03:47.839953456Z" level=info msg="StartContainer for \"650ed33d4ad6316b0d074f068ce4738e6d2c517331467a0d7ae7ab865e5a8de9\"" Apr 17 03:03:47.841135 containerd[1572]: time="2026-04-17T03:03:47.841047724Z" level=info msg="connecting to shim 650ed33d4ad6316b0d074f068ce4738e6d2c517331467a0d7ae7ab865e5a8de9" address="unix:///run/containerd/s/0ec218e694739da0520ca341704f3a02391f37d8e25ceb4b863d0946c32a4b9f" protocol=ttrpc version=3 Apr 17 03:03:47.862075 systemd[1]: Started cri-containerd-650ed33d4ad6316b0d074f068ce4738e6d2c517331467a0d7ae7ab865e5a8de9.scope - libcontainer container 
650ed33d4ad6316b0d074f068ce4738e6d2c517331467a0d7ae7ab865e5a8de9. Apr 17 03:03:47.904673 containerd[1572]: time="2026-04-17T03:03:47.904622122Z" level=info msg="StartContainer for \"650ed33d4ad6316b0d074f068ce4738e6d2c517331467a0d7ae7ab865e5a8de9\" returns successfully" Apr 17 03:03:49.260073 containerd[1572]: time="2026-04-17T03:03:49.260016106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:49.260710 containerd[1572]: time="2026-04-17T03:03:49.260675351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 17 03:03:49.261417 containerd[1572]: time="2026-04-17T03:03:49.261384942Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:49.263102 containerd[1572]: time="2026-04-17T03:03:49.263058316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:49.263608 containerd[1572]: time="2026-04-17T03:03:49.263587313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.443111091s" Apr 17 03:03:49.263640 containerd[1572]: time="2026-04-17T03:03:49.263613290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference 
\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 17 03:03:49.264602 containerd[1572]: time="2026-04-17T03:03:49.264569936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 03:03:49.268893 containerd[1572]: time="2026-04-17T03:03:49.268770958Z" level=info msg="CreateContainer within sandbox \"c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 03:03:49.276806 containerd[1572]: time="2026-04-17T03:03:49.276752505Z" level=info msg="Container e1eeaaa27d2d68d974a262a82d0208de9c109a1b6b1fe39aee3c838a26ae0d33: CDI devices from CRI Config.CDIDevices: []" Apr 17 03:03:49.284273 containerd[1572]: time="2026-04-17T03:03:49.284215309Z" level=info msg="CreateContainer within sandbox \"c143a404e453a289376916f432d6a78bfd8fb6e8b14887c49f22f8d57fd66053\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e1eeaaa27d2d68d974a262a82d0208de9c109a1b6b1fe39aee3c838a26ae0d33\"" Apr 17 03:03:49.284943 containerd[1572]: time="2026-04-17T03:03:49.284860155Z" level=info msg="StartContainer for \"e1eeaaa27d2d68d974a262a82d0208de9c109a1b6b1fe39aee3c838a26ae0d33\"" Apr 17 03:03:49.286063 containerd[1572]: time="2026-04-17T03:03:49.286036752Z" level=info msg="connecting to shim e1eeaaa27d2d68d974a262a82d0208de9c109a1b6b1fe39aee3c838a26ae0d33" address="unix:///run/containerd/s/d7359b98fff85c9618884176ed43bcfcef6deab1c464b0873b98e79d20082da0" protocol=ttrpc version=3 Apr 17 03:03:49.303071 systemd[1]: Started cri-containerd-e1eeaaa27d2d68d974a262a82d0208de9c109a1b6b1fe39aee3c838a26ae0d33.scope - libcontainer container e1eeaaa27d2d68d974a262a82d0208de9c109a1b6b1fe39aee3c838a26ae0d33. 
Apr 17 03:03:49.360026 containerd[1572]: time="2026-04-17T03:03:49.359972488Z" level=info msg="StartContainer for \"e1eeaaa27d2d68d974a262a82d0208de9c109a1b6b1fe39aee3c838a26ae0d33\" returns successfully" Apr 17 03:03:49.788255 kubelet[2725]: I0417 03:03:49.788193 2725 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 03:03:49.789378 kubelet[2725]: I0417 03:03:49.789310 2725 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 03:03:49.888330 kubelet[2725]: I0417 03:03:49.888280 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5pbx6" podStartSLOduration=16.576348735 podStartE2EDuration="20.888264635s" podCreationTimestamp="2026-04-17 03:03:29 +0000 UTC" firstStartedPulling="2026-04-17 03:03:44.952560749 +0000 UTC m=+32.274578694" lastFinishedPulling="2026-04-17 03:03:49.264476657 +0000 UTC m=+36.586494594" observedRunningTime="2026-04-17 03:03:49.888154927 +0000 UTC m=+37.210172863" watchObservedRunningTime="2026-04-17 03:03:49.888264635 +0000 UTC m=+37.210282578" Apr 17 03:03:50.058555 systemd[1]: Started sshd@7-10.0.0.7:22-10.0.0.1:34774.service - OpenSSH per-connection server daemon (10.0.0.1:34774). Apr 17 03:03:50.123238 sshd[4276]: Accepted publickey for core from 10.0.0.1 port 34774 ssh2: RSA SHA256:FVrkeUr4F1DUvuGbghPLjRpHgCWbfVIbP6ixe+jkRU8 Apr 17 03:03:50.126339 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 03:03:50.130499 systemd-logind[1551]: New session 8 of user core. Apr 17 03:03:50.137071 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 17 03:03:50.209546 sshd[4280]: Connection closed by 10.0.0.1 port 34774 Apr 17 03:03:50.209811 sshd-session[4276]: pam_unix(sshd:session): session closed for user core Apr 17 03:03:50.212180 systemd[1]: sshd@7-10.0.0.7:22-10.0.0.1:34774.service: Deactivated successfully. Apr 17 03:03:50.213720 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 03:03:50.215654 systemd-logind[1551]: Session 8 logged out. Waiting for processes to exit. Apr 17 03:03:50.216456 systemd-logind[1551]: Removed session 8. Apr 17 03:03:50.790487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2396242181.mount: Deactivated successfully. Apr 17 03:03:50.820526 containerd[1572]: time="2026-04-17T03:03:50.820040721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:50.822071 containerd[1572]: time="2026-04-17T03:03:50.821979322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 17 03:03:50.823404 containerd[1572]: time="2026-04-17T03:03:50.823325235Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:50.825934 containerd[1572]: time="2026-04-17T03:03:50.825617172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:50.826008 containerd[1572]: time="2026-04-17T03:03:50.825976263Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.561373434s" Apr 17 03:03:50.826040 containerd[1572]: time="2026-04-17T03:03:50.826013380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 17 03:03:50.832410 containerd[1572]: time="2026-04-17T03:03:50.832367596Z" level=info msg="CreateContainer within sandbox \"28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 03:03:50.839668 containerd[1572]: time="2026-04-17T03:03:50.839630811Z" level=info msg="Container 1ac4300c40de2ee5ab764100c124c46827a2b1d30cf533ac2503926352f02e0c: CDI devices from CRI Config.CDIDevices: []" Apr 17 03:03:50.848530 containerd[1572]: time="2026-04-17T03:03:50.848426397Z" level=info msg="CreateContainer within sandbox \"28f061bac44cbf7e0e3e3ea7a2e75cade38db45400f8a1fdc3ac30ef06e54393\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1ac4300c40de2ee5ab764100c124c46827a2b1d30cf533ac2503926352f02e0c\"" Apr 17 03:03:50.849794 containerd[1572]: time="2026-04-17T03:03:50.849665371Z" level=info msg="StartContainer for \"1ac4300c40de2ee5ab764100c124c46827a2b1d30cf533ac2503926352f02e0c\"" Apr 17 03:03:50.851105 containerd[1572]: time="2026-04-17T03:03:50.851050684Z" level=info msg="connecting to shim 1ac4300c40de2ee5ab764100c124c46827a2b1d30cf533ac2503926352f02e0c" address="unix:///run/containerd/s/0ec218e694739da0520ca341704f3a02391f37d8e25ceb4b863d0946c32a4b9f" protocol=ttrpc version=3 Apr 17 03:03:50.878150 systemd[1]: Started cri-containerd-1ac4300c40de2ee5ab764100c124c46827a2b1d30cf533ac2503926352f02e0c.scope - libcontainer container 1ac4300c40de2ee5ab764100c124c46827a2b1d30cf533ac2503926352f02e0c. 
Apr 17 03:03:50.926576 containerd[1572]: time="2026-04-17T03:03:50.926539693Z" level=info msg="StartContainer for \"1ac4300c40de2ee5ab764100c124c46827a2b1d30cf533ac2503926352f02e0c\" returns successfully" Apr 17 03:03:54.752835 containerd[1572]: time="2026-04-17T03:03:54.752739742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c57849b64-mlx2h,Uid:e24d71c5-7063-40d8-9b55-b8ca8f5e8578,Namespace:calico-system,Attempt:0,}" Apr 17 03:03:54.755936 containerd[1572]: time="2026-04-17T03:03:54.755166445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d76c7b76-rl429,Uid:a023fec7-9b08-45e6-b187-85e88df49048,Namespace:calico-system,Attempt:0,}" Apr 17 03:03:54.862762 systemd-networkd[1483]: califcfc578af34: Link UP Apr 17 03:03:54.863079 systemd-networkd[1483]: califcfc578af34: Gained carrier Apr 17 03:03:54.871622 kubelet[2725]: I0417 03:03:54.871535 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-c5d8d6dd-srnj5" podStartSLOduration=5.497183737 podStartE2EDuration="9.871515731s" podCreationTimestamp="2026-04-17 03:03:45 +0000 UTC" firstStartedPulling="2026-04-17 03:03:46.452746344 +0000 UTC m=+33.774764280" lastFinishedPulling="2026-04-17 03:03:50.827078337 +0000 UTC m=+38.149096274" observedRunningTime="2026-04-17 03:03:51.901174378 +0000 UTC m=+39.223192316" watchObservedRunningTime="2026-04-17 03:03:54.871515731 +0000 UTC m=+42.193533662" Apr 17 03:03:54.877370 containerd[1572]: 2026-04-17 03:03:54.776 [ERROR][4448] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 03:03:54.877370 containerd[1572]: 2026-04-17 03:03:54.786 [INFO][4448] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c57849b64--mlx2h-eth0 
calico-kube-controllers-c57849b64- calico-system e24d71c5-7063-40d8-9b55-b8ca8f5e8578 832 0 2026-04-17 03:03:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c57849b64 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c57849b64-mlx2h eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califcfc578af34 [] [] }} ContainerID="f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779" Namespace="calico-system" Pod="calico-kube-controllers-c57849b64-mlx2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c57849b64--mlx2h-" Apr 17 03:03:54.877370 containerd[1572]: 2026-04-17 03:03:54.786 [INFO][4448] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779" Namespace="calico-system" Pod="calico-kube-controllers-c57849b64-mlx2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c57849b64--mlx2h-eth0" Apr 17 03:03:54.877370 containerd[1572]: 2026-04-17 03:03:54.812 [INFO][4471] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779" HandleID="k8s-pod-network.f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779" Workload="localhost-k8s-calico--kube--controllers--c57849b64--mlx2h-eth0" Apr 17 03:03:54.877561 containerd[1572]: 2026-04-17 03:03:54.820 [INFO][4471] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779" HandleID="k8s-pod-network.f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779" Workload="localhost-k8s-calico--kube--controllers--c57849b64--mlx2h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef910), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-c57849b64-mlx2h", "timestamp":"2026-04-17 03:03:54.812464909 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002b6dc0)} Apr 17 03:03:54.877561 containerd[1572]: 2026-04-17 03:03:54.821 [INFO][4471] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 03:03:54.877561 containerd[1572]: 2026-04-17 03:03:54.821 [INFO][4471] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 03:03:54.877561 containerd[1572]: 2026-04-17 03:03:54.821 [INFO][4471] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 03:03:54.877561 containerd[1572]: 2026-04-17 03:03:54.829 [INFO][4471] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779" host="localhost" Apr 17 03:03:54.877561 containerd[1572]: 2026-04-17 03:03:54.833 [INFO][4471] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 03:03:54.877561 containerd[1572]: 2026-04-17 03:03:54.837 [INFO][4471] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 03:03:54.877561 containerd[1572]: 2026-04-17 03:03:54.839 [INFO][4471] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 03:03:54.877561 containerd[1572]: 2026-04-17 03:03:54.841 [INFO][4471] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 03:03:54.878677 containerd[1572]: 2026-04-17 03:03:54.841 [INFO][4471] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779" host="localhost" Apr 17 03:03:54.878677 containerd[1572]: 2026-04-17 03:03:54.844 [INFO][4471] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779 Apr 17 03:03:54.878677 containerd[1572]: 2026-04-17 03:03:54.848 [INFO][4471] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779" host="localhost" Apr 17 03:03:54.878677 containerd[1572]: 2026-04-17 03:03:54.858 [INFO][4471] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779" host="localhost" Apr 17 03:03:54.878677 containerd[1572]: 2026-04-17 03:03:54.858 [INFO][4471] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779" host="localhost" Apr 17 03:03:54.878677 containerd[1572]: 2026-04-17 03:03:54.858 [INFO][4471] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 03:03:54.878677 containerd[1572]: 2026-04-17 03:03:54.858 [INFO][4471] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779" HandleID="k8s-pod-network.f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779" Workload="localhost-k8s-calico--kube--controllers--c57849b64--mlx2h-eth0" Apr 17 03:03:54.878798 containerd[1572]: 2026-04-17 03:03:54.861 [INFO][4448] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779" Namespace="calico-system" Pod="calico-kube-controllers-c57849b64-mlx2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c57849b64--mlx2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c57849b64--mlx2h-eth0", GenerateName:"calico-kube-controllers-c57849b64-", Namespace:"calico-system", SelfLink:"", UID:"e24d71c5-7063-40d8-9b55-b8ca8f5e8578", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 3, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c57849b64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c57849b64-mlx2h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califcfc578af34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 03:03:54.878866 containerd[1572]: 2026-04-17 03:03:54.861 [INFO][4448] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779" Namespace="calico-system" Pod="calico-kube-controllers-c57849b64-mlx2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c57849b64--mlx2h-eth0" Apr 17 03:03:54.878866 containerd[1572]: 2026-04-17 03:03:54.861 [INFO][4448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califcfc578af34 ContainerID="f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779" Namespace="calico-system" Pod="calico-kube-controllers-c57849b64-mlx2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c57849b64--mlx2h-eth0" Apr 17 03:03:54.878866 containerd[1572]: 2026-04-17 03:03:54.863 [INFO][4448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779" Namespace="calico-system" Pod="calico-kube-controllers-c57849b64-mlx2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c57849b64--mlx2h-eth0" Apr 17 03:03:54.878949 containerd[1572]: 2026-04-17 03:03:54.864 [INFO][4448] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779" Namespace="calico-system" Pod="calico-kube-controllers-c57849b64-mlx2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c57849b64--mlx2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c57849b64--mlx2h-eth0", GenerateName:"calico-kube-controllers-c57849b64-", Namespace:"calico-system", SelfLink:"", UID:"e24d71c5-7063-40d8-9b55-b8ca8f5e8578", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 3, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c57849b64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779", Pod:"calico-kube-controllers-c57849b64-mlx2h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califcfc578af34", MAC:"4e:a6:52:0d:28:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 03:03:54.879003 containerd[1572]: 2026-04-17 03:03:54.873 [INFO][4448] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779" Namespace="calico-system" Pod="calico-kube-controllers-c57849b64-mlx2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c57849b64--mlx2h-eth0" Apr 17 03:03:54.899889 containerd[1572]: time="2026-04-17T03:03:54.899836373Z" level=info msg="connecting to shim 
f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779" address="unix:///run/containerd/s/582659b5c527ca02cc6103fdb3fa3288292e3e6096ef01b75c87c697354cb963" namespace=k8s.io protocol=ttrpc version=3 Apr 17 03:03:54.924273 systemd[1]: Started cri-containerd-f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779.scope - libcontainer container f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779. Apr 17 03:03:54.938142 systemd-resolved[1486]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 03:03:54.958988 systemd-networkd[1483]: cali9c728d3c955: Link UP Apr 17 03:03:54.960114 systemd-networkd[1483]: cali9c728d3c955: Gained carrier Apr 17 03:03:54.975119 containerd[1572]: 2026-04-17 03:03:54.780 [ERROR][4454] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 03:03:54.975119 containerd[1572]: 2026-04-17 03:03:54.788 [INFO][4454] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57d76c7b76--rl429-eth0 calico-apiserver-57d76c7b76- calico-system a023fec7-9b08-45e6-b187-85e88df49048 830 0 2026-04-17 03:03:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57d76c7b76 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57d76c7b76-rl429 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali9c728d3c955 [] [] }} ContainerID="45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d" Namespace="calico-system" Pod="calico-apiserver-57d76c7b76-rl429" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d76c7b76--rl429-" Apr 17 03:03:54.975119 
containerd[1572]: 2026-04-17 03:03:54.788 [INFO][4454] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d" Namespace="calico-system" Pod="calico-apiserver-57d76c7b76-rl429" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d76c7b76--rl429-eth0" Apr 17 03:03:54.975119 containerd[1572]: 2026-04-17 03:03:54.815 [INFO][4473] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d" HandleID="k8s-pod-network.45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d" Workload="localhost-k8s-calico--apiserver--57d76c7b76--rl429-eth0" Apr 17 03:03:54.975829 containerd[1572]: 2026-04-17 03:03:54.827 [INFO][4473] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d" HandleID="k8s-pod-network.45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d" Workload="localhost-k8s-calico--apiserver--57d76c7b76--rl429-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efd30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-57d76c7b76-rl429", "timestamp":"2026-04-17 03:03:54.815124778 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002e49a0)} Apr 17 03:03:54.975829 containerd[1572]: 2026-04-17 03:03:54.827 [INFO][4473] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 03:03:54.975829 containerd[1572]: 2026-04-17 03:03:54.859 [INFO][4473] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 03:03:54.975829 containerd[1572]: 2026-04-17 03:03:54.859 [INFO][4473] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 03:03:54.975829 containerd[1572]: 2026-04-17 03:03:54.929 [INFO][4473] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d" host="localhost" Apr 17 03:03:54.975829 containerd[1572]: 2026-04-17 03:03:54.935 [INFO][4473] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 03:03:54.975829 containerd[1572]: 2026-04-17 03:03:54.940 [INFO][4473] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 03:03:54.975829 containerd[1572]: 2026-04-17 03:03:54.941 [INFO][4473] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 03:03:54.975829 containerd[1572]: 2026-04-17 03:03:54.943 [INFO][4473] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 03:03:54.975829 containerd[1572]: 2026-04-17 03:03:54.943 [INFO][4473] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d" host="localhost" Apr 17 03:03:54.976095 containerd[1572]: 2026-04-17 03:03:54.944 [INFO][4473] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d Apr 17 03:03:54.976095 containerd[1572]: 2026-04-17 03:03:54.948 [INFO][4473] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d" host="localhost" Apr 17 03:03:54.976095 containerd[1572]: 2026-04-17 03:03:54.955 [INFO][4473] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d" host="localhost" Apr 17 03:03:54.976095 containerd[1572]: 2026-04-17 03:03:54.955 [INFO][4473] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d" host="localhost" Apr 17 03:03:54.976095 containerd[1572]: 2026-04-17 03:03:54.955 [INFO][4473] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 03:03:54.976095 containerd[1572]: 2026-04-17 03:03:54.955 [INFO][4473] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d" HandleID="k8s-pod-network.45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d" Workload="localhost-k8s-calico--apiserver--57d76c7b76--rl429-eth0" Apr 17 03:03:54.976305 containerd[1572]: 2026-04-17 03:03:54.957 [INFO][4454] cni-plugin/k8s.go 418: Populated endpoint ContainerID="45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d" Namespace="calico-system" Pod="calico-apiserver-57d76c7b76-rl429" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d76c7b76--rl429-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d76c7b76--rl429-eth0", GenerateName:"calico-apiserver-57d76c7b76-", Namespace:"calico-system", SelfLink:"", UID:"a023fec7-9b08-45e6-b187-85e88df49048", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 3, 3, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d76c7b76", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57d76c7b76-rl429", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9c728d3c955", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 03:03:54.976460 containerd[1572]: 2026-04-17 03:03:54.957 [INFO][4454] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d" Namespace="calico-system" Pod="calico-apiserver-57d76c7b76-rl429" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d76c7b76--rl429-eth0" Apr 17 03:03:54.976460 containerd[1572]: 2026-04-17 03:03:54.957 [INFO][4454] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c728d3c955 ContainerID="45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d" Namespace="calico-system" Pod="calico-apiserver-57d76c7b76-rl429" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d76c7b76--rl429-eth0" Apr 17 03:03:54.976460 containerd[1572]: 2026-04-17 03:03:54.963 [INFO][4454] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d" Namespace="calico-system" Pod="calico-apiserver-57d76c7b76-rl429" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d76c7b76--rl429-eth0" Apr 17 03:03:54.976539 containerd[1572]: 2026-04-17 03:03:54.963 [INFO][4454] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d" Namespace="calico-system" Pod="calico-apiserver-57d76c7b76-rl429" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d76c7b76--rl429-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d76c7b76--rl429-eth0", GenerateName:"calico-apiserver-57d76c7b76-", Namespace:"calico-system", SelfLink:"", UID:"a023fec7-9b08-45e6-b187-85e88df49048", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 3, 3, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d76c7b76", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d", Pod:"calico-apiserver-57d76c7b76-rl429", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9c728d3c955", MAC:"12:65:40:9f:0b:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 03:03:54.976595 containerd[1572]: 2026-04-17 03:03:54.973 [INFO][4454] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d" Namespace="calico-system" Pod="calico-apiserver-57d76c7b76-rl429" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d76c7b76--rl429-eth0" Apr 17 03:03:54.981739 containerd[1572]: time="2026-04-17T03:03:54.981696504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c57849b64-mlx2h,Uid:e24d71c5-7063-40d8-9b55-b8ca8f5e8578,Namespace:calico-system,Attempt:0,} returns sandbox id \"f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779\"" Apr 17 03:03:54.983524 containerd[1572]: time="2026-04-17T03:03:54.983470006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 03:03:55.003218 containerd[1572]: time="2026-04-17T03:03:55.003125225Z" level=info msg="connecting to shim 45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d" address="unix:///run/containerd/s/35a3e00c37a9f3ec1d518f85247a2c003dee5008166303fb7a3e5eda332ee898" namespace=k8s.io protocol=ttrpc version=3 Apr 17 03:03:55.030138 systemd[1]: Started cri-containerd-45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d.scope - libcontainer container 45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d. Apr 17 03:03:55.039690 systemd-resolved[1486]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 03:03:55.071562 containerd[1572]: time="2026-04-17T03:03:55.071504894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d76c7b76-rl429,Uid:a023fec7-9b08-45e6-b187-85e88df49048,Namespace:calico-system,Attempt:0,} returns sandbox id \"45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d\"" Apr 17 03:03:55.225874 systemd[1]: Started sshd@8-10.0.0.7:22-10.0.0.1:34788.service - OpenSSH per-connection server daemon (10.0.0.1:34788). 
Apr 17 03:03:55.290246 sshd[4623]: Accepted publickey for core from 10.0.0.1 port 34788 ssh2: RSA SHA256:FVrkeUr4F1DUvuGbghPLjRpHgCWbfVIbP6ixe+jkRU8 Apr 17 03:03:55.291489 sshd-session[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 03:03:55.295293 systemd-logind[1551]: New session 9 of user core. Apr 17 03:03:55.305121 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 03:03:55.386676 sshd[4626]: Connection closed by 10.0.0.1 port 34788 Apr 17 03:03:55.387056 sshd-session[4623]: pam_unix(sshd:session): session closed for user core Apr 17 03:03:55.390117 systemd[1]: sshd@8-10.0.0.7:22-10.0.0.1:34788.service: Deactivated successfully. Apr 17 03:03:55.392096 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 03:03:55.393416 systemd-logind[1551]: Session 9 logged out. Waiting for processes to exit. Apr 17 03:03:55.394946 systemd-logind[1551]: Removed session 9. Apr 17 03:03:56.331184 systemd-networkd[1483]: cali9c728d3c955: Gained IPv6LL Apr 17 03:03:56.395284 systemd-networkd[1483]: califcfc578af34: Gained IPv6LL Apr 17 03:03:56.751109 kubelet[2725]: E0417 03:03:56.751071 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:56.751468 containerd[1572]: time="2026-04-17T03:03:56.751428775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sdh6t,Uid:1fe8bbc7-3295-4ad2-bdf8-87ef564a3bb1,Namespace:kube-system,Attempt:0,}" Apr 17 03:03:56.872985 systemd-networkd[1483]: cali7417fbe2426: Link UP Apr 17 03:03:56.875262 systemd-networkd[1483]: cali7417fbe2426: Gained carrier Apr 17 03:03:56.887492 containerd[1572]: 2026-04-17 03:03:56.784 [ERROR][4672] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 
03:03:56.887492 containerd[1572]: 2026-04-17 03:03:56.795 [INFO][4672] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--sdh6t-eth0 coredns-66bc5c9577- kube-system 1fe8bbc7-3295-4ad2-bdf8-87ef564a3bb1 823 0 2026-04-17 03:03:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-sdh6t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7417fbe2426 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec" Namespace="kube-system" Pod="coredns-66bc5c9577-sdh6t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sdh6t-" Apr 17 03:03:56.887492 containerd[1572]: 2026-04-17 03:03:56.795 [INFO][4672] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec" Namespace="kube-system" Pod="coredns-66bc5c9577-sdh6t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sdh6t-eth0" Apr 17 03:03:56.887492 containerd[1572]: 2026-04-17 03:03:56.827 [INFO][4685] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec" HandleID="k8s-pod-network.be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec" Workload="localhost-k8s-coredns--66bc5c9577--sdh6t-eth0" Apr 17 03:03:56.887770 containerd[1572]: 2026-04-17 03:03:56.832 [INFO][4685] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec" HandleID="k8s-pod-network.be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec" 
Workload="localhost-k8s-coredns--66bc5c9577--sdh6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fbb0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-sdh6t", "timestamp":"2026-04-17 03:03:56.827218484 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003f3ce0)} Apr 17 03:03:56.887770 containerd[1572]: 2026-04-17 03:03:56.833 [INFO][4685] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 03:03:56.887770 containerd[1572]: 2026-04-17 03:03:56.833 [INFO][4685] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 03:03:56.887770 containerd[1572]: 2026-04-17 03:03:56.833 [INFO][4685] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 03:03:56.887770 containerd[1572]: 2026-04-17 03:03:56.835 [INFO][4685] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec" host="localhost" Apr 17 03:03:56.887770 containerd[1572]: 2026-04-17 03:03:56.840 [INFO][4685] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 03:03:56.887770 containerd[1572]: 2026-04-17 03:03:56.844 [INFO][4685] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 03:03:56.887770 containerd[1572]: 2026-04-17 03:03:56.846 [INFO][4685] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 03:03:56.887770 containerd[1572]: 2026-04-17 03:03:56.848 [INFO][4685] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 03:03:56.887770 containerd[1572]: 2026-04-17 03:03:56.848 [INFO][4685] ipam/ipam.go 1245: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec" host="localhost" Apr 17 03:03:56.888020 containerd[1572]: 2026-04-17 03:03:56.851 [INFO][4685] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec Apr 17 03:03:56.888020 containerd[1572]: 2026-04-17 03:03:56.858 [INFO][4685] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec" host="localhost" Apr 17 03:03:56.888020 containerd[1572]: 2026-04-17 03:03:56.866 [INFO][4685] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec" host="localhost" Apr 17 03:03:56.888020 containerd[1572]: 2026-04-17 03:03:56.866 [INFO][4685] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec" host="localhost" Apr 17 03:03:56.888020 containerd[1572]: 2026-04-17 03:03:56.866 [INFO][4685] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 03:03:56.888020 containerd[1572]: 2026-04-17 03:03:56.866 [INFO][4685] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec" HandleID="k8s-pod-network.be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec" Workload="localhost-k8s-coredns--66bc5c9577--sdh6t-eth0" Apr 17 03:03:56.888112 containerd[1572]: 2026-04-17 03:03:56.868 [INFO][4672] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec" Namespace="kube-system" Pod="coredns-66bc5c9577-sdh6t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sdh6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--sdh6t-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1fe8bbc7-3295-4ad2-bdf8-87ef564a3bb1", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 3, 3, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-sdh6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7417fbe2426", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 03:03:56.888112 containerd[1572]: 2026-04-17 03:03:56.868 [INFO][4672] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec" Namespace="kube-system" Pod="coredns-66bc5c9577-sdh6t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sdh6t-eth0" Apr 17 03:03:56.888112 containerd[1572]: 2026-04-17 03:03:56.868 [INFO][4672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7417fbe2426 ContainerID="be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec" Namespace="kube-system" Pod="coredns-66bc5c9577-sdh6t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sdh6t-eth0" Apr 17 03:03:56.888112 containerd[1572]: 2026-04-17 03:03:56.874 [INFO][4672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec" Namespace="kube-system" Pod="coredns-66bc5c9577-sdh6t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sdh6t-eth0" Apr 17 03:03:56.888112 containerd[1572]: 2026-04-17 03:03:56.874 [INFO][4672] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec" Namespace="kube-system" Pod="coredns-66bc5c9577-sdh6t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sdh6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--sdh6t-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1fe8bbc7-3295-4ad2-bdf8-87ef564a3bb1", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 3, 3, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec", Pod:"coredns-66bc5c9577-sdh6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7417fbe2426", MAC:"ae:e6:24:a9:6e:ce", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 03:03:56.888112 containerd[1572]: 2026-04-17 03:03:56.885 [INFO][4672] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec" Namespace="kube-system" Pod="coredns-66bc5c9577-sdh6t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sdh6t-eth0" Apr 17 03:03:56.925250 containerd[1572]: time="2026-04-17T03:03:56.925196567Z" level=info msg="connecting to shim be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec" address="unix:///run/containerd/s/d21ca20e382979a91d8ecf3a46c7e6c6d20a41887ec86c8db9d56f847ff8b700" namespace=k8s.io protocol=ttrpc version=3 Apr 17 03:03:56.953189 systemd[1]: Started cri-containerd-be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec.scope - libcontainer container be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec. 
Apr 17 03:03:56.965136 systemd-resolved[1486]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 03:03:56.983448 kubelet[2725]: I0417 03:03:56.983385 2725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 03:03:57.024558 containerd[1572]: time="2026-04-17T03:03:57.024423912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sdh6t,Uid:1fe8bbc7-3295-4ad2-bdf8-87ef564a3bb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec\"" Apr 17 03:03:57.026126 kubelet[2725]: E0417 03:03:57.026076 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:57.030043 containerd[1572]: time="2026-04-17T03:03:57.030020681Z" level=info msg="CreateContainer within sandbox \"be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 03:03:57.037210 containerd[1572]: time="2026-04-17T03:03:57.037163377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:57.038880 containerd[1572]: time="2026-04-17T03:03:57.038818633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 17 03:03:57.043949 containerd[1572]: time="2026-04-17T03:03:57.043725162Z" level=info msg="Container 922785da51d95255282a7490d70f96aec54e9556cbc587ab1aaf28b5af18caa1: CDI devices from CRI Config.CDIDevices: []" Apr 17 03:03:57.044021 containerd[1572]: time="2026-04-17T03:03:57.043971871Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 17 03:03:57.047717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1259828794.mount: Deactivated successfully. Apr 17 03:03:57.050680 containerd[1572]: time="2026-04-17T03:03:57.050314703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 03:03:57.050748 containerd[1572]: time="2026-04-17T03:03:57.050735379Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.06720546s" Apr 17 03:03:57.050780 containerd[1572]: time="2026-04-17T03:03:57.050757714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 17 03:03:57.052657 containerd[1572]: time="2026-04-17T03:03:57.052614722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 03:03:57.059374 containerd[1572]: time="2026-04-17T03:03:57.059126308Z" level=info msg="CreateContainer within sandbox \"be6612a8893635f89aca3151a1cb276a7ceb97e7162f69dc2ddf487583bb1fec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"922785da51d95255282a7490d70f96aec54e9556cbc587ab1aaf28b5af18caa1\"" Apr 17 03:03:57.061575 containerd[1572]: time="2026-04-17T03:03:57.061541453Z" level=info msg="StartContainer for \"922785da51d95255282a7490d70f96aec54e9556cbc587ab1aaf28b5af18caa1\"" Apr 17 03:03:57.063236 containerd[1572]: time="2026-04-17T03:03:57.063196083Z" level=info msg="connecting to shim 
922785da51d95255282a7490d70f96aec54e9556cbc587ab1aaf28b5af18caa1" address="unix:///run/containerd/s/d21ca20e382979a91d8ecf3a46c7e6c6d20a41887ec86c8db9d56f847ff8b700" protocol=ttrpc version=3 Apr 17 03:03:57.067802 containerd[1572]: time="2026-04-17T03:03:57.067547575Z" level=info msg="CreateContainer within sandbox \"f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 03:03:57.078608 containerd[1572]: time="2026-04-17T03:03:57.078564045Z" level=info msg="Container 38a5d7dd8936e714a97ad80bde614bfb4607f79283a3a4376b4a6b3efce13561: CDI devices from CRI Config.CDIDevices: []" Apr 17 03:03:57.088186 systemd[1]: Started cri-containerd-922785da51d95255282a7490d70f96aec54e9556cbc587ab1aaf28b5af18caa1.scope - libcontainer container 922785da51d95255282a7490d70f96aec54e9556cbc587ab1aaf28b5af18caa1. Apr 17 03:03:57.090605 containerd[1572]: time="2026-04-17T03:03:57.090555294Z" level=info msg="CreateContainer within sandbox \"f50aaef86bf3a8e8d575e3d9940593b16ab5693f4f3fbfc56bc7afd6f84c5779\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"38a5d7dd8936e714a97ad80bde614bfb4607f79283a3a4376b4a6b3efce13561\"" Apr 17 03:03:57.091036 containerd[1572]: time="2026-04-17T03:03:57.090981362Z" level=info msg="StartContainer for \"38a5d7dd8936e714a97ad80bde614bfb4607f79283a3a4376b4a6b3efce13561\"" Apr 17 03:03:57.092319 containerd[1572]: time="2026-04-17T03:03:57.092190682Z" level=info msg="connecting to shim 38a5d7dd8936e714a97ad80bde614bfb4607f79283a3a4376b4a6b3efce13561" address="unix:///run/containerd/s/582659b5c527ca02cc6103fdb3fa3288292e3e6096ef01b75c87c697354cb963" protocol=ttrpc version=3 Apr 17 03:03:57.117178 systemd[1]: Started cri-containerd-38a5d7dd8936e714a97ad80bde614bfb4607f79283a3a4376b4a6b3efce13561.scope - libcontainer container 38a5d7dd8936e714a97ad80bde614bfb4607f79283a3a4376b4a6b3efce13561. 
Apr 17 03:03:57.132220 containerd[1572]: time="2026-04-17T03:03:57.132145978Z" level=info msg="StartContainer for \"922785da51d95255282a7490d70f96aec54e9556cbc587ab1aaf28b5af18caa1\" returns successfully" Apr 17 03:03:57.174686 containerd[1572]: time="2026-04-17T03:03:57.174554974Z" level=info msg="StartContainer for \"38a5d7dd8936e714a97ad80bde614bfb4607f79283a3a4376b4a6b3efce13561\" returns successfully" Apr 17 03:03:57.750977 kubelet[2725]: E0417 03:03:57.750732 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:57.751459 containerd[1572]: time="2026-04-17T03:03:57.751319827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pnqxl,Uid:c2693bff-f36f-4e79-8b27-a87e39664a97,Namespace:kube-system,Attempt:0,}" Apr 17 03:03:57.752543 containerd[1572]: time="2026-04-17T03:03:57.752489811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-5h9fm,Uid:8bf2c43b-809d-4d53-8950-e87873e687fe,Namespace:calico-system,Attempt:0,}" Apr 17 03:03:57.850854 systemd-networkd[1483]: cali8d92e628f01: Link UP Apr 17 03:03:57.851147 systemd-networkd[1483]: cali8d92e628f01: Gained carrier Apr 17 03:03:57.867852 containerd[1572]: 2026-04-17 03:03:57.778 [ERROR][4905] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 03:03:57.867852 containerd[1572]: 2026-04-17 03:03:57.793 [INFO][4905] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--cccfbd5cf--5h9fm-eth0 goldmane-cccfbd5cf- calico-system 8bf2c43b-809d-4d53-8950-e87873e687fe 833 0 2026-04-17 03:03:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-cccfbd5cf-5h9fm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8d92e628f01 [] [] }} ContainerID="f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5h9fm" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--5h9fm-" Apr 17 03:03:57.867852 containerd[1572]: 2026-04-17 03:03:57.793 [INFO][4905] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5h9fm" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--5h9fm-eth0" Apr 17 03:03:57.867852 containerd[1572]: 2026-04-17 03:03:57.816 [INFO][4932] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca" HandleID="k8s-pod-network.f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca" Workload="localhost-k8s-goldmane--cccfbd5cf--5h9fm-eth0" Apr 17 03:03:57.867852 containerd[1572]: 2026-04-17 03:03:57.824 [INFO][4932] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca" HandleID="k8s-pod-network.f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca" Workload="localhost-k8s-goldmane--cccfbd5cf--5h9fm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000369b70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-cccfbd5cf-5h9fm", "timestamp":"2026-04-17 03:03:57.816599006 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002071e0)} Apr 17 
03:03:57.867852 containerd[1572]: 2026-04-17 03:03:57.824 [INFO][4932] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 03:03:57.867852 containerd[1572]: 2026-04-17 03:03:57.824 [INFO][4932] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 03:03:57.867852 containerd[1572]: 2026-04-17 03:03:57.824 [INFO][4932] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 03:03:57.867852 containerd[1572]: 2026-04-17 03:03:57.826 [INFO][4932] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca" host="localhost" Apr 17 03:03:57.867852 containerd[1572]: 2026-04-17 03:03:57.830 [INFO][4932] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 03:03:57.867852 containerd[1572]: 2026-04-17 03:03:57.833 [INFO][4932] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 03:03:57.867852 containerd[1572]: 2026-04-17 03:03:57.834 [INFO][4932] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 03:03:57.867852 containerd[1572]: 2026-04-17 03:03:57.836 [INFO][4932] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 03:03:57.867852 containerd[1572]: 2026-04-17 03:03:57.836 [INFO][4932] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca" host="localhost" Apr 17 03:03:57.867852 containerd[1572]: 2026-04-17 03:03:57.837 [INFO][4932] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca Apr 17 03:03:57.867852 containerd[1572]: 2026-04-17 03:03:57.841 [INFO][4932] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca" host="localhost" Apr 17 03:03:57.867852 containerd[1572]: 2026-04-17 03:03:57.846 [INFO][4932] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca" host="localhost" Apr 17 03:03:57.867852 containerd[1572]: 2026-04-17 03:03:57.847 [INFO][4932] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca" host="localhost" Apr 17 03:03:57.867852 containerd[1572]: 2026-04-17 03:03:57.847 [INFO][4932] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 03:03:57.867852 containerd[1572]: 2026-04-17 03:03:57.847 [INFO][4932] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca" HandleID="k8s-pod-network.f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca" Workload="localhost-k8s-goldmane--cccfbd5cf--5h9fm-eth0" Apr 17 03:03:57.868608 containerd[1572]: 2026-04-17 03:03:57.849 [INFO][4905] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5h9fm" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--5h9fm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--5h9fm-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8bf2c43b-809d-4d53-8950-e87873e687fe", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 3, 3, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-cccfbd5cf-5h9fm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8d92e628f01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 03:03:57.868608 containerd[1572]: 2026-04-17 03:03:57.849 [INFO][4905] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5h9fm" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--5h9fm-eth0" Apr 17 03:03:57.868608 containerd[1572]: 2026-04-17 03:03:57.849 [INFO][4905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d92e628f01 ContainerID="f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5h9fm" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--5h9fm-eth0" Apr 17 03:03:57.868608 containerd[1572]: 2026-04-17 03:03:57.851 [INFO][4905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5h9fm" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--5h9fm-eth0" Apr 17 03:03:57.868608 containerd[1572]: 
2026-04-17 03:03:57.851 [INFO][4905] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5h9fm" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--5h9fm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--5h9fm-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8bf2c43b-809d-4d53-8950-e87873e687fe", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 3, 3, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca", Pod:"goldmane-cccfbd5cf-5h9fm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8d92e628f01", MAC:"f2:ec:34:72:cb:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 03:03:57.868608 containerd[1572]: 2026-04-17 03:03:57.859 [INFO][4905] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5h9fm" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--5h9fm-eth0" Apr 17 03:03:57.893176 containerd[1572]: time="2026-04-17T03:03:57.893132188Z" level=info msg="connecting to shim f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca" address="unix:///run/containerd/s/e50a7f8026035a246af46629da34bf9ee7d56190ccf3f0feec739c71de508cea" namespace=k8s.io protocol=ttrpc version=3 Apr 17 03:03:57.905803 kubelet[2725]: E0417 03:03:57.905709 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 03:03:57.917933 kubelet[2725]: I0417 03:03:57.917801 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c57849b64-mlx2h" podStartSLOduration=26.849168104 podStartE2EDuration="28.917740999s" podCreationTimestamp="2026-04-17 03:03:29 +0000 UTC" firstStartedPulling="2026-04-17 03:03:54.983068627 +0000 UTC m=+42.305086560" lastFinishedPulling="2026-04-17 03:03:57.051641523 +0000 UTC m=+44.373659455" observedRunningTime="2026-04-17 03:03:57.916900792 +0000 UTC m=+45.238918726" watchObservedRunningTime="2026-04-17 03:03:57.917740999 +0000 UTC m=+45.239758939" Apr 17 03:03:57.921071 systemd[1]: Started cri-containerd-f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca.scope - libcontainer container f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca. 
Apr 17 03:03:57.931546 kubelet[2725]: I0417 03:03:57.931254 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sdh6t" podStartSLOduration=37.931230211 podStartE2EDuration="37.931230211s" podCreationTimestamp="2026-04-17 03:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 03:03:57.927489724 +0000 UTC m=+45.249507667" watchObservedRunningTime="2026-04-17 03:03:57.931230211 +0000 UTC m=+45.253248155" Apr 17 03:03:57.950006 systemd-resolved[1486]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 03:03:57.987037 systemd-networkd[1483]: calibc15f940f8c: Link UP Apr 17 03:03:57.988564 containerd[1572]: time="2026-04-17T03:03:57.987652635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-5h9fm,Uid:8bf2c43b-809d-4d53-8950-e87873e687fe,Namespace:calico-system,Attempt:0,} returns sandbox id \"f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca\"" Apr 17 03:03:57.988049 systemd-networkd[1483]: calibc15f940f8c: Gained carrier Apr 17 03:03:58.001633 containerd[1572]: 2026-04-17 03:03:57.780 [ERROR][4902] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 03:03:58.001633 containerd[1572]: 2026-04-17 03:03:57.793 [INFO][4902] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--pnqxl-eth0 coredns-66bc5c9577- kube-system c2693bff-f36f-4e79-8b27-a87e39664a97 828 0 2026-04-17 03:03:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost 
coredns-66bc5c9577-pnqxl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibc15f940f8c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37" Namespace="kube-system" Pod="coredns-66bc5c9577-pnqxl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pnqxl-" Apr 17 03:03:58.001633 containerd[1572]: 2026-04-17 03:03:57.793 [INFO][4902] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37" Namespace="kube-system" Pod="coredns-66bc5c9577-pnqxl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pnqxl-eth0" Apr 17 03:03:58.001633 containerd[1572]: 2026-04-17 03:03:57.818 [INFO][4930] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37" HandleID="k8s-pod-network.97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37" Workload="localhost-k8s-coredns--66bc5c9577--pnqxl-eth0" Apr 17 03:03:58.001633 containerd[1572]: 2026-04-17 03:03:57.826 [INFO][4930] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37" HandleID="k8s-pod-network.97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37" Workload="localhost-k8s-coredns--66bc5c9577--pnqxl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a1a10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-pnqxl", "timestamp":"2026-04-17 03:03:57.818848462 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00060c000)} Apr 17 
03:03:58.001633 containerd[1572]: 2026-04-17 03:03:57.826 [INFO][4930] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 03:03:58.001633 containerd[1572]: 2026-04-17 03:03:57.847 [INFO][4930] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 03:03:58.001633 containerd[1572]: 2026-04-17 03:03:57.847 [INFO][4930] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 17 03:03:58.001633 containerd[1572]: 2026-04-17 03:03:57.930 [INFO][4930] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37" host="localhost"
Apr 17 03:03:58.001633 containerd[1572]: 2026-04-17 03:03:57.939 [INFO][4930] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 17 03:03:58.001633 containerd[1572]: 2026-04-17 03:03:57.948 [INFO][4930] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 17 03:03:58.001633 containerd[1572]: 2026-04-17 03:03:57.950 [INFO][4930] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 17 03:03:58.001633 containerd[1572]: 2026-04-17 03:03:57.955 [INFO][4930] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 17 03:03:58.001633 containerd[1572]: 2026-04-17 03:03:57.955 [INFO][4930] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37" host="localhost"
Apr 17 03:03:58.001633 containerd[1572]: 2026-04-17 03:03:57.957 [INFO][4930] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37
Apr 17 03:03:58.001633 containerd[1572]: 2026-04-17 03:03:57.962 [INFO][4930] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37" host="localhost"
Apr 17 03:03:58.001633 containerd[1572]: 2026-04-17 03:03:57.975 [INFO][4930] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37" host="localhost"
Apr 17 03:03:58.001633 containerd[1572]: 2026-04-17 03:03:57.975 [INFO][4930] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37" host="localhost"
Apr 17 03:03:58.001633 containerd[1572]: 2026-04-17 03:03:57.975 [INFO][4930] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 03:03:58.001633 containerd[1572]: 2026-04-17 03:03:57.975 [INFO][4930] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37" HandleID="k8s-pod-network.97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37" Workload="localhost-k8s-coredns--66bc5c9577--pnqxl-eth0"
Apr 17 03:03:58.002213 containerd[1572]: 2026-04-17 03:03:57.979 [INFO][4902] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37" Namespace="kube-system" Pod="coredns-66bc5c9577-pnqxl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pnqxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--pnqxl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c2693bff-f36f-4e79-8b27-a87e39664a97", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 3, 3, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-pnqxl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc15f940f8c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 03:03:58.002213 containerd[1572]: 2026-04-17 03:03:57.979 [INFO][4902] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37" Namespace="kube-system" Pod="coredns-66bc5c9577-pnqxl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pnqxl-eth0"
Apr 17 03:03:58.002213 containerd[1572]: 2026-04-17 03:03:57.980 [INFO][4902] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc15f940f8c ContainerID="97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37" Namespace="kube-system" Pod="coredns-66bc5c9577-pnqxl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pnqxl-eth0"
Apr 17 03:03:58.002213 containerd[1572]: 2026-04-17 03:03:57.985 [INFO][4902] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37" Namespace="kube-system" Pod="coredns-66bc5c9577-pnqxl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pnqxl-eth0"
Apr 17 03:03:58.002213 containerd[1572]: 2026-04-17 03:03:57.988 [INFO][4902] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37" Namespace="kube-system" Pod="coredns-66bc5c9577-pnqxl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pnqxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--pnqxl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c2693bff-f36f-4e79-8b27-a87e39664a97", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 3, 3, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37", Pod:"coredns-66bc5c9577-pnqxl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc15f940f8c", MAC:"0e:26:52:30:40:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 03:03:58.002213 containerd[1572]: 2026-04-17 03:03:57.999 [INFO][4902] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37" Namespace="kube-system" Pod="coredns-66bc5c9577-pnqxl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pnqxl-eth0"
Apr 17 03:03:58.007800 kubelet[2725]: I0417 03:03:58.007692 2725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 17 03:03:58.008167 kubelet[2725]: E0417 03:03:58.008139 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 03:03:58.023661 containerd[1572]: time="2026-04-17T03:03:58.023279142Z" level=info msg="connecting to shim 97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37" address="unix:///run/containerd/s/ffd683c63ef9b2aaf87646a7887bb4280f0143adf9122c67c9e30b856139c716" namespace=k8s.io protocol=ttrpc version=3
Apr 17 03:03:58.052142 systemd[1]: Started cri-containerd-97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37.scope - libcontainer container 97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37.
Apr 17 03:03:58.063757 systemd-resolved[1486]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 17 03:03:58.092428 containerd[1572]: time="2026-04-17T03:03:58.092372416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pnqxl,Uid:c2693bff-f36f-4e79-8b27-a87e39664a97,Namespace:kube-system,Attempt:0,} returns sandbox id \"97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37\""
Apr 17 03:03:58.093480 kubelet[2725]: E0417 03:03:58.093423 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 03:03:58.102820 containerd[1572]: time="2026-04-17T03:03:58.102776476Z" level=info msg="CreateContainer within sandbox \"97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 17 03:03:58.118334 containerd[1572]: time="2026-04-17T03:03:58.118302330Z" level=info msg="Container 018adc034e6bd0dd6dec427b1905ee3d6895cc43750962f5657ed2ff125a9dad: CDI devices from CRI Config.CDIDevices: []"
Apr 17 03:03:58.125749 containerd[1572]: time="2026-04-17T03:03:58.125497106Z" level=info msg="CreateContainer within sandbox \"97481c34f21c9b4e1597673083ffbf462650876969dc294df7d020a396d8ec37\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"018adc034e6bd0dd6dec427b1905ee3d6895cc43750962f5657ed2ff125a9dad\""
Apr 17 03:03:58.127214 containerd[1572]: time="2026-04-17T03:03:58.127074000Z" level=info msg="StartContainer for \"018adc034e6bd0dd6dec427b1905ee3d6895cc43750962f5657ed2ff125a9dad\""
Apr 17 03:03:58.128599 containerd[1572]: time="2026-04-17T03:03:58.128396920Z" level=info msg="connecting to shim 018adc034e6bd0dd6dec427b1905ee3d6895cc43750962f5657ed2ff125a9dad" address="unix:///run/containerd/s/ffd683c63ef9b2aaf87646a7887bb4280f0143adf9122c67c9e30b856139c716" protocol=ttrpc version=3
Apr 17 03:03:58.163247 systemd[1]: Started cri-containerd-018adc034e6bd0dd6dec427b1905ee3d6895cc43750962f5657ed2ff125a9dad.scope - libcontainer container 018adc034e6bd0dd6dec427b1905ee3d6895cc43750962f5657ed2ff125a9dad.
Apr 17 03:03:58.204962 containerd[1572]: time="2026-04-17T03:03:58.204887874Z" level=info msg="StartContainer for \"018adc034e6bd0dd6dec427b1905ee3d6895cc43750962f5657ed2ff125a9dad\" returns successfully"
Apr 17 03:03:58.251101 systemd-networkd[1483]: cali7417fbe2426: Gained IPv6LL
Apr 17 03:03:58.677318 systemd-networkd[1483]: vxlan.calico: Link UP
Apr 17 03:03:58.677335 systemd-networkd[1483]: vxlan.calico: Gained carrier
Apr 17 03:03:58.912007 kubelet[2725]: E0417 03:03:58.911346 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 03:03:58.918089 kubelet[2725]: E0417 03:03:58.917727 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 03:03:58.918089 kubelet[2725]: E0417 03:03:58.918032 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 03:03:59.032019 kubelet[2725]: I0417 03:03:59.030903 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pnqxl" podStartSLOduration=39.030880316 podStartE2EDuration="39.030880316s" podCreationTimestamp="2026-04-17 03:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 03:03:59.007021573 +0000 UTC m=+46.329039521" watchObservedRunningTime="2026-04-17 03:03:59.030880316 +0000 UTC m=+46.352898275"
Apr 17 03:03:59.750551 containerd[1572]: time="2026-04-17T03:03:59.750490236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d76c7b76-gmmgs,Uid:b8e9572e-cd21-4783-81eb-03cb12ebcc87,Namespace:calico-system,Attempt:0,}"
Apr 17 03:03:59.787206 systemd-networkd[1483]: calibc15f940f8c: Gained IPv6LL
Apr 17 03:03:59.787710 systemd-networkd[1483]: cali8d92e628f01: Gained IPv6LL
Apr 17 03:03:59.852161 systemd-networkd[1483]: vxlan.calico: Gained IPv6LL
Apr 17 03:03:59.884647 systemd-networkd[1483]: cali11bc295af90: Link UP
Apr 17 03:03:59.884753 systemd-networkd[1483]: cali11bc295af90: Gained carrier
Apr 17 03:03:59.902961 containerd[1572]: 2026-04-17 03:03:59.788 [INFO][5271] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57d76c7b76--gmmgs-eth0 calico-apiserver-57d76c7b76- calico-system b8e9572e-cd21-4783-81eb-03cb12ebcc87 829 0 2026-04-17 03:03:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57d76c7b76 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57d76c7b76-gmmgs eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali11bc295af90 [] [] }} ContainerID="e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a" Namespace="calico-system" Pod="calico-apiserver-57d76c7b76-gmmgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d76c7b76--gmmgs-"
Apr 17 03:03:59.902961 containerd[1572]: 2026-04-17 03:03:59.788 [INFO][5271] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a" Namespace="calico-system" Pod="calico-apiserver-57d76c7b76-gmmgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d76c7b76--gmmgs-eth0"
Apr 17 03:03:59.902961 containerd[1572]: 2026-04-17 03:03:59.829 [INFO][5287] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a" HandleID="k8s-pod-network.e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a" Workload="localhost-k8s-calico--apiserver--57d76c7b76--gmmgs-eth0"
Apr 17 03:03:59.902961 containerd[1572]: 2026-04-17 03:03:59.838 [INFO][5287] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a" HandleID="k8s-pod-network.e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a" Workload="localhost-k8s-calico--apiserver--57d76c7b76--gmmgs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000261930), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-57d76c7b76-gmmgs", "timestamp":"2026-04-17 03:03:59.829542298 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00035d340)}
Apr 17 03:03:59.902961 containerd[1572]: 2026-04-17 03:03:59.838 [INFO][5287] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 03:03:59.902961 containerd[1572]: 2026-04-17 03:03:59.838 [INFO][5287] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 03:03:59.902961 containerd[1572]: 2026-04-17 03:03:59.838 [INFO][5287] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 17 03:03:59.902961 containerd[1572]: 2026-04-17 03:03:59.842 [INFO][5287] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a" host="localhost"
Apr 17 03:03:59.902961 containerd[1572]: 2026-04-17 03:03:59.846 [INFO][5287] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 17 03:03:59.902961 containerd[1572]: 2026-04-17 03:03:59.851 [INFO][5287] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 17 03:03:59.902961 containerd[1572]: 2026-04-17 03:03:59.854 [INFO][5287] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 17 03:03:59.902961 containerd[1572]: 2026-04-17 03:03:59.857 [INFO][5287] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 17 03:03:59.902961 containerd[1572]: 2026-04-17 03:03:59.858 [INFO][5287] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a" host="localhost"
Apr 17 03:03:59.902961 containerd[1572]: 2026-04-17 03:03:59.860 [INFO][5287] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a
Apr 17 03:03:59.902961 containerd[1572]: 2026-04-17 03:03:59.867 [INFO][5287] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a" host="localhost"
Apr 17 03:03:59.902961 containerd[1572]: 2026-04-17 03:03:59.877 [INFO][5287] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a" host="localhost"
Apr 17 03:03:59.902961 containerd[1572]: 2026-04-17 03:03:59.877 [INFO][5287] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a" host="localhost"
Apr 17 03:03:59.902961 containerd[1572]: 2026-04-17 03:03:59.877 [INFO][5287] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 03:03:59.902961 containerd[1572]: 2026-04-17 03:03:59.877 [INFO][5287] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a" HandleID="k8s-pod-network.e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a" Workload="localhost-k8s-calico--apiserver--57d76c7b76--gmmgs-eth0"
Apr 17 03:03:59.903588 containerd[1572]: 2026-04-17 03:03:59.881 [INFO][5271] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a" Namespace="calico-system" Pod="calico-apiserver-57d76c7b76-gmmgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d76c7b76--gmmgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d76c7b76--gmmgs-eth0", GenerateName:"calico-apiserver-57d76c7b76-", Namespace:"calico-system", SelfLink:"", UID:"b8e9572e-cd21-4783-81eb-03cb12ebcc87", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 3, 3, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d76c7b76", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57d76c7b76-gmmgs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali11bc295af90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 03:03:59.903588 containerd[1572]: 2026-04-17 03:03:59.882 [INFO][5271] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a" Namespace="calico-system" Pod="calico-apiserver-57d76c7b76-gmmgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d76c7b76--gmmgs-eth0"
Apr 17 03:03:59.903588 containerd[1572]: 2026-04-17 03:03:59.882 [INFO][5271] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11bc295af90 ContainerID="e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a" Namespace="calico-system" Pod="calico-apiserver-57d76c7b76-gmmgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d76c7b76--gmmgs-eth0"
Apr 17 03:03:59.903588 containerd[1572]: 2026-04-17 03:03:59.884 [INFO][5271] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a" Namespace="calico-system" Pod="calico-apiserver-57d76c7b76-gmmgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d76c7b76--gmmgs-eth0"
Apr 17 03:03:59.903588 containerd[1572]: 2026-04-17 03:03:59.884 [INFO][5271] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a" Namespace="calico-system" Pod="calico-apiserver-57d76c7b76-gmmgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d76c7b76--gmmgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d76c7b76--gmmgs-eth0", GenerateName:"calico-apiserver-57d76c7b76-", Namespace:"calico-system", SelfLink:"", UID:"b8e9572e-cd21-4783-81eb-03cb12ebcc87", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 3, 3, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d76c7b76", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a", Pod:"calico-apiserver-57d76c7b76-gmmgs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali11bc295af90", MAC:"1a:b1:03:04:39:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 03:03:59.903588 containerd[1572]: 2026-04-17 03:03:59.897 [INFO][5271] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a" Namespace="calico-system" Pod="calico-apiserver-57d76c7b76-gmmgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d76c7b76--gmmgs-eth0"
Apr 17 03:03:59.919678 kubelet[2725]: E0417 03:03:59.919396 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 03:03:59.920646 kubelet[2725]: E0417 03:03:59.919947 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 03:03:59.941431 containerd[1572]: time="2026-04-17T03:03:59.941365186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 03:03:59.942978 containerd[1572]: time="2026-04-17T03:03:59.942938838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780"
Apr 17 03:03:59.943998 containerd[1572]: time="2026-04-17T03:03:59.943955700Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 03:03:59.945858 containerd[1572]: time="2026-04-17T03:03:59.945793224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 03:03:59.946275 containerd[1572]: time="2026-04-17T03:03:59.946240376Z" level=info msg="connecting to shim e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a" address="unix:///run/containerd/s/16dbaa5d7ebcb48536285c6e938e681c30e7600337bad2d94b63d4b732dbafd7" namespace=k8s.io protocol=ttrpc version=3
Apr 17 03:03:59.946806 containerd[1572]: time="2026-04-17T03:03:59.946685872Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.894033742s"
Apr 17 03:03:59.946806 containerd[1572]: time="2026-04-17T03:03:59.946716903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Apr 17 03:03:59.948935 containerd[1572]: time="2026-04-17T03:03:59.948231562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\""
Apr 17 03:03:59.954414 containerd[1572]: time="2026-04-17T03:03:59.954316557Z" level=info msg="CreateContainer within sandbox \"45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 17 03:03:59.968975 containerd[1572]: time="2026-04-17T03:03:59.966349519Z" level=info msg="Container 537fbf34d6325a6c28d13c320a7896138a5c4bdb028bd8598a9fc74565b67102: CDI devices from CRI Config.CDIDevices: []"
Apr 17 03:03:59.974687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3256112252.mount: Deactivated successfully.
Apr 17 03:03:59.986423 systemd[1]: Started cri-containerd-e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a.scope - libcontainer container e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a.
Apr 17 03:04:00.022971 containerd[1572]: time="2026-04-17T03:04:00.021560896Z" level=info msg="CreateContainer within sandbox \"45ad57fdb3de7e1a94c1e9a9f9783576a0600fd90a4ec3845c6239bc3bf02d4d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"537fbf34d6325a6c28d13c320a7896138a5c4bdb028bd8598a9fc74565b67102\""
Apr 17 03:04:00.026195 containerd[1572]: time="2026-04-17T03:04:00.025798560Z" level=info msg="StartContainer for \"537fbf34d6325a6c28d13c320a7896138a5c4bdb028bd8598a9fc74565b67102\""
Apr 17 03:04:00.032559 systemd-resolved[1486]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 17 03:04:00.036881 containerd[1572]: time="2026-04-17T03:04:00.032265460Z" level=info msg="connecting to shim 537fbf34d6325a6c28d13c320a7896138a5c4bdb028bd8598a9fc74565b67102" address="unix:///run/containerd/s/35a3e00c37a9f3ec1d518f85247a2c003dee5008166303fb7a3e5eda332ee898" protocol=ttrpc version=3
Apr 17 03:04:00.069225 systemd[1]: Started cri-containerd-537fbf34d6325a6c28d13c320a7896138a5c4bdb028bd8598a9fc74565b67102.scope - libcontainer container 537fbf34d6325a6c28d13c320a7896138a5c4bdb028bd8598a9fc74565b67102.
Apr 17 03:04:00.091004 containerd[1572]: time="2026-04-17T03:04:00.090936427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d76c7b76-gmmgs,Uid:b8e9572e-cd21-4783-81eb-03cb12ebcc87,Namespace:calico-system,Attempt:0,} returns sandbox id \"e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a\""
Apr 17 03:04:00.099136 containerd[1572]: time="2026-04-17T03:04:00.098978547Z" level=info msg="CreateContainer within sandbox \"e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 17 03:04:00.112831 containerd[1572]: time="2026-04-17T03:04:00.112081574Z" level=info msg="Container de59a462f78837f9e7a1acbe695c0a6e41a80c579c8c9e166d3ff9df449be7c6: CDI devices from CRI Config.CDIDevices: []"
Apr 17 03:04:00.125558 containerd[1572]: time="2026-04-17T03:04:00.125512313Z" level=info msg="CreateContainer within sandbox \"e5c027e9ca75156d57b47e8ecb85a72dec220221ba2932a3dd88173d5ca0158a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"de59a462f78837f9e7a1acbe695c0a6e41a80c579c8c9e166d3ff9df449be7c6\""
Apr 17 03:04:00.127263 containerd[1572]: time="2026-04-17T03:04:00.127200129Z" level=info msg="StartContainer for \"de59a462f78837f9e7a1acbe695c0a6e41a80c579c8c9e166d3ff9df449be7c6\""
Apr 17 03:04:00.128404 containerd[1572]: time="2026-04-17T03:04:00.128380415Z" level=info msg="connecting to shim de59a462f78837f9e7a1acbe695c0a6e41a80c579c8c9e166d3ff9df449be7c6" address="unix:///run/containerd/s/16dbaa5d7ebcb48536285c6e938e681c30e7600337bad2d94b63d4b732dbafd7" protocol=ttrpc version=3
Apr 17 03:04:00.134628 containerd[1572]: time="2026-04-17T03:04:00.134583326Z" level=info msg="StartContainer for \"537fbf34d6325a6c28d13c320a7896138a5c4bdb028bd8598a9fc74565b67102\" returns successfully"
Apr 17 03:04:00.151090 systemd[1]: Started cri-containerd-de59a462f78837f9e7a1acbe695c0a6e41a80c579c8c9e166d3ff9df449be7c6.scope - libcontainer container de59a462f78837f9e7a1acbe695c0a6e41a80c579c8c9e166d3ff9df449be7c6.
Apr 17 03:04:00.201663 containerd[1572]: time="2026-04-17T03:04:00.201608190Z" level=info msg="StartContainer for \"de59a462f78837f9e7a1acbe695c0a6e41a80c579c8c9e166d3ff9df449be7c6\" returns successfully"
Apr 17 03:04:00.405200 systemd[1]: Started sshd@9-10.0.0.7:22-10.0.0.1:35204.service - OpenSSH per-connection server daemon (10.0.0.1:35204).
Apr 17 03:04:00.475333 sshd[5455]: Accepted publickey for core from 10.0.0.1 port 35204 ssh2: RSA SHA256:FVrkeUr4F1DUvuGbghPLjRpHgCWbfVIbP6ixe+jkRU8
Apr 17 03:04:00.477680 sshd-session[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 03:04:00.482125 systemd-logind[1551]: New session 10 of user core.
Apr 17 03:04:00.488202 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 17 03:04:00.608991 sshd[5460]: Connection closed by 10.0.0.1 port 35204
Apr 17 03:04:00.609310 sshd-session[5455]: pam_unix(sshd:session): session closed for user core
Apr 17 03:04:00.613080 systemd[1]: sshd@9-10.0.0.7:22-10.0.0.1:35204.service: Deactivated successfully.
Apr 17 03:04:00.613330 systemd-logind[1551]: Session 10 logged out. Waiting for processes to exit.
Apr 17 03:04:00.615353 systemd[1]: session-10.scope: Deactivated successfully.
Apr 17 03:04:00.617026 systemd-logind[1551]: Removed session 10.
Apr 17 03:04:00.932032 kubelet[2725]: E0417 03:04:00.931952 2725 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 03:04:00.958084 kubelet[2725]: I0417 03:04:00.958040 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-57d76c7b76-gmmgs" podStartSLOduration=32.958009666 podStartE2EDuration="32.958009666s" podCreationTimestamp="2026-04-17 03:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 03:04:00.956408707 +0000 UTC m=+48.278426642" watchObservedRunningTime="2026-04-17 03:04:00.958009666 +0000 UTC m=+48.280027605"
Apr 17 03:04:00.958762 kubelet[2725]: I0417 03:04:00.958712 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-57d76c7b76-rl429" podStartSLOduration=28.083918111 podStartE2EDuration="32.958686007s" podCreationTimestamp="2026-04-17 03:03:28 +0000 UTC" firstStartedPulling="2026-04-17 03:03:55.073090203 +0000 UTC m=+42.395108134" lastFinishedPulling="2026-04-17 03:03:59.947858096 +0000 UTC m=+47.269876030" observedRunningTime="2026-04-17 03:04:00.946231406 +0000 UTC m=+48.268249349" watchObservedRunningTime="2026-04-17 03:04:00.958686007 +0000 UTC m=+48.280703939"
Apr 17 03:04:01.643218 systemd-networkd[1483]: cali11bc295af90: Gained IPv6LL
Apr 17 03:04:03.618633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2988556334.mount: Deactivated successfully.
Apr 17 03:04:03.895862 containerd[1572]: time="2026-04-17T03:04:03.895725358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 03:04:03.896474 containerd[1572]: time="2026-04-17T03:04:03.896445020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Apr 17 03:04:03.897432 containerd[1572]: time="2026-04-17T03:04:03.897396890Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 03:04:03.899444 containerd[1572]: time="2026-04-17T03:04:03.899395996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 03:04:03.899995 containerd[1572]: time="2026-04-17T03:04:03.899956389Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.951669779s"
Apr 17 03:04:03.899995 containerd[1572]: time="2026-04-17T03:04:03.899990328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Apr 17 03:04:03.903442 containerd[1572]: time="2026-04-17T03:04:03.903370093Z" level=info msg="CreateContainer within sandbox \"f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Apr 17 03:04:03.909483 containerd[1572]: time="2026-04-17T03:04:03.909436929Z" level=info msg="Container dfcd499f6d2b24295c4d49e0e664cc9e8c044aa3f158dac460cf12359839158a: CDI devices from CRI Config.CDIDevices: []"
Apr 17 03:04:03.916621 containerd[1572]: time="2026-04-17T03:04:03.916569707Z" level=info msg="CreateContainer within sandbox \"f4e99b276e51711b7a2e466b2a75d0f6fc75a48923476c97efc03a4c3fb8e9ca\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"dfcd499f6d2b24295c4d49e0e664cc9e8c044aa3f158dac460cf12359839158a\""
Apr 17 03:04:03.917232 containerd[1572]: time="2026-04-17T03:04:03.917207961Z" level=info msg="StartContainer for \"dfcd499f6d2b24295c4d49e0e664cc9e8c044aa3f158dac460cf12359839158a\""
Apr 17 03:04:03.918188 containerd[1572]: time="2026-04-17T03:04:03.918146911Z" level=info msg="connecting to shim dfcd499f6d2b24295c4d49e0e664cc9e8c044aa3f158dac460cf12359839158a" address="unix:///run/containerd/s/e50a7f8026035a246af46629da34bf9ee7d56190ccf3f0feec739c71de508cea" protocol=ttrpc version=3
Apr 17 03:04:03.979103 systemd[1]: Started cri-containerd-dfcd499f6d2b24295c4d49e0e664cc9e8c044aa3f158dac460cf12359839158a.scope - libcontainer container dfcd499f6d2b24295c4d49e0e664cc9e8c044aa3f158dac460cf12359839158a.
Apr 17 03:04:04.022067 containerd[1572]: time="2026-04-17T03:04:04.022027063Z" level=info msg="StartContainer for \"dfcd499f6d2b24295c4d49e0e664cc9e8c044aa3f158dac460cf12359839158a\" returns successfully"
Apr 17 03:04:04.970362 kubelet[2725]: I0417 03:04:04.970092 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-5h9fm" podStartSLOduration=31.063353427 podStartE2EDuration="36.970035886s" podCreationTimestamp="2026-04-17 03:03:28 +0000 UTC" firstStartedPulling="2026-04-17 03:03:57.993895202 +0000 UTC m=+45.315913134" lastFinishedPulling="2026-04-17 03:04:03.900577661 +0000 UTC m=+51.222595593" observedRunningTime="2026-04-17 03:04:04.968703003 +0000 UTC m=+52.290720946" watchObservedRunningTime="2026-04-17 03:04:04.970035886 +0000 UTC m=+52.292053833"
Apr 17 03:04:05.620836 systemd[1]: Started sshd@10-10.0.0.7:22-10.0.0.1:35210.service - OpenSSH per-connection server daemon (10.0.0.1:35210).
Apr 17 03:04:05.700807 sshd[5594]: Accepted publickey for core from 10.0.0.1 port 35210 ssh2: RSA SHA256:FVrkeUr4F1DUvuGbghPLjRpHgCWbfVIbP6ixe+jkRU8
Apr 17 03:04:05.702140 sshd-session[5594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 03:04:05.706656 systemd-logind[1551]: New session 11 of user core.
Apr 17 03:04:05.718276 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 17 03:04:05.800148 sshd[5597]: Connection closed by 10.0.0.1 port 35210
Apr 17 03:04:05.800605 sshd-session[5594]: pam_unix(sshd:session): session closed for user core
Apr 17 03:04:05.812549 systemd[1]: sshd@10-10.0.0.7:22-10.0.0.1:35210.service: Deactivated successfully.
Apr 17 03:04:05.814649 systemd[1]: session-11.scope: Deactivated successfully.
Apr 17 03:04:05.815655 systemd-logind[1551]: Session 11 logged out. Waiting for processes to exit.
Apr 17 03:04:05.818165 systemd[1]: Started sshd@11-10.0.0.7:22-10.0.0.1:35218.service - OpenSSH per-connection server daemon (10.0.0.1:35218).
Apr 17 03:04:05.818729 systemd-logind[1551]: Removed session 11.
Apr 17 03:04:05.863653 sshd[5612]: Accepted publickey for core from 10.0.0.1 port 35218 ssh2: RSA SHA256:FVrkeUr4F1DUvuGbghPLjRpHgCWbfVIbP6ixe+jkRU8
Apr 17 03:04:05.864653 sshd-session[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 03:04:05.868717 systemd-logind[1551]: New session 12 of user core.
Apr 17 03:04:05.877093 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 17 03:04:05.986838 sshd[5615]: Connection closed by 10.0.0.1 port 35218
Apr 17 03:04:05.985686 sshd-session[5612]: pam_unix(sshd:session): session closed for user core
Apr 17 03:04:05.992439 systemd[1]: sshd@11-10.0.0.7:22-10.0.0.1:35218.service: Deactivated successfully.
Apr 17 03:04:05.995579 systemd[1]: session-12.scope: Deactivated successfully.
Apr 17 03:04:05.999730 systemd-logind[1551]: Session 12 logged out. Waiting for processes to exit.
Apr 17 03:04:06.004215 systemd[1]: Started sshd@12-10.0.0.7:22-10.0.0.1:35224.service - OpenSSH per-connection server daemon (10.0.0.1:35224).
Apr 17 03:04:06.006183 systemd-logind[1551]: Removed session 12.
Apr 17 03:04:06.055361 sshd[5647]: Accepted publickey for core from 10.0.0.1 port 35224 ssh2: RSA SHA256:FVrkeUr4F1DUvuGbghPLjRpHgCWbfVIbP6ixe+jkRU8
Apr 17 03:04:06.056835 sshd-session[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 03:04:06.061148 systemd-logind[1551]: New session 13 of user core.
Apr 17 03:04:06.075376 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 17 03:04:06.156435 sshd[5655]: Connection closed by 10.0.0.1 port 35224
Apr 17 03:04:06.156775 sshd-session[5647]: pam_unix(sshd:session): session closed for user core
Apr 17 03:04:06.159982 systemd[1]: sshd@12-10.0.0.7:22-10.0.0.1:35224.service: Deactivated successfully.
Apr 17 03:04:06.161348 systemd[1]: session-13.scope: Deactivated successfully.
Apr 17 03:04:06.162080 systemd-logind[1551]: Session 13 logged out. Waiting for processes to exit.
Apr 17 03:04:06.162982 systemd-logind[1551]: Removed session 13.
Apr 17 03:04:11.169358 systemd[1]: Started sshd@13-10.0.0.7:22-10.0.0.1:40432.service - OpenSSH per-connection server daemon (10.0.0.1:40432).
Apr 17 03:04:11.214594 sshd[5685]: Accepted publickey for core from 10.0.0.1 port 40432 ssh2: RSA SHA256:FVrkeUr4F1DUvuGbghPLjRpHgCWbfVIbP6ixe+jkRU8
Apr 17 03:04:11.215792 sshd-session[5685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 03:04:11.221041 systemd-logind[1551]: New session 14 of user core.
Apr 17 03:04:11.230200 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 17 03:04:11.292008 sshd[5688]: Connection closed by 10.0.0.1 port 40432
Apr 17 03:04:11.292567 sshd-session[5685]: pam_unix(sshd:session): session closed for user core
Apr 17 03:04:11.301761 systemd[1]: sshd@13-10.0.0.7:22-10.0.0.1:40432.service: Deactivated successfully.
Apr 17 03:04:11.303109 systemd[1]: session-14.scope: Deactivated successfully.
Apr 17 03:04:11.303705 systemd-logind[1551]: Session 14 logged out. Waiting for processes to exit.
Apr 17 03:04:11.305545 systemd[1]: Started sshd@14-10.0.0.7:22-10.0.0.1:40438.service - OpenSSH per-connection server daemon (10.0.0.1:40438).
Apr 17 03:04:11.306371 systemd-logind[1551]: Removed session 14.
Apr 17 03:04:11.352093 sshd[5701]: Accepted publickey for core from 10.0.0.1 port 40438 ssh2: RSA SHA256:FVrkeUr4F1DUvuGbghPLjRpHgCWbfVIbP6ixe+jkRU8
Apr 17 03:04:11.353326 sshd-session[5701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 03:04:11.361255 systemd-logind[1551]: New session 15 of user core.
Apr 17 03:04:11.374208 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 17 03:04:11.471553 kernel: hrtimer: interrupt took 3480146 ns
Apr 17 03:04:11.627687 sshd[5704]: Connection closed by 10.0.0.1 port 40438
Apr 17 03:04:11.628651 sshd-session[5701]: pam_unix(sshd:session): session closed for user core
Apr 17 03:04:11.636060 systemd[1]: sshd@14-10.0.0.7:22-10.0.0.1:40438.service: Deactivated successfully.
Apr 17 03:04:11.637844 systemd[1]: session-15.scope: Deactivated successfully.
Apr 17 03:04:11.638513 systemd-logind[1551]: Session 15 logged out. Waiting for processes to exit.
Apr 17 03:04:11.640628 systemd[1]: Started sshd@15-10.0.0.7:22-10.0.0.1:40442.service - OpenSSH per-connection server daemon (10.0.0.1:40442).
Apr 17 03:04:11.641109 systemd-logind[1551]: Removed session 15.
Apr 17 03:04:11.699033 sshd[5715]: Accepted publickey for core from 10.0.0.1 port 40442 ssh2: RSA SHA256:FVrkeUr4F1DUvuGbghPLjRpHgCWbfVIbP6ixe+jkRU8
Apr 17 03:04:11.700037 sshd-session[5715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 03:04:11.704561 systemd-logind[1551]: New session 16 of user core.
Apr 17 03:04:11.711211 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 17 03:04:12.102325 sshd[5718]: Connection closed by 10.0.0.1 port 40442
Apr 17 03:04:12.103481 sshd-session[5715]: pam_unix(sshd:session): session closed for user core
Apr 17 03:04:12.111501 systemd[1]: sshd@15-10.0.0.7:22-10.0.0.1:40442.service: Deactivated successfully.
Apr 17 03:04:12.113726 systemd[1]: session-16.scope: Deactivated successfully.
Apr 17 03:04:12.117025 systemd-logind[1551]: Session 16 logged out. Waiting for processes to exit.
Apr 17 03:04:12.121342 systemd[1]: Started sshd@16-10.0.0.7:22-10.0.0.1:40448.service - OpenSSH per-connection server daemon (10.0.0.1:40448).
Apr 17 03:04:12.125840 systemd-logind[1551]: Removed session 16.
Apr 17 03:04:12.179443 sshd[5738]: Accepted publickey for core from 10.0.0.1 port 40448 ssh2: RSA SHA256:FVrkeUr4F1DUvuGbghPLjRpHgCWbfVIbP6ixe+jkRU8
Apr 17 03:04:12.180354 sshd-session[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 03:04:12.183961 systemd-logind[1551]: New session 17 of user core.
Apr 17 03:04:12.191054 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 17 03:04:12.365514 sshd[5746]: Connection closed by 10.0.0.1 port 40448
Apr 17 03:04:12.366132 sshd-session[5738]: pam_unix(sshd:session): session closed for user core
Apr 17 03:04:12.377994 systemd[1]: sshd@16-10.0.0.7:22-10.0.0.1:40448.service: Deactivated successfully.
Apr 17 03:04:12.380703 systemd[1]: session-17.scope: Deactivated successfully.
Apr 17 03:04:12.381357 systemd-logind[1551]: Session 17 logged out. Waiting for processes to exit.
Apr 17 03:04:12.383578 systemd[1]: Started sshd@17-10.0.0.7:22-10.0.0.1:40452.service - OpenSSH per-connection server daemon (10.0.0.1:40452).
Apr 17 03:04:12.384038 systemd-logind[1551]: Removed session 17.
Apr 17 03:04:12.429352 sshd[5757]: Accepted publickey for core from 10.0.0.1 port 40452 ssh2: RSA SHA256:FVrkeUr4F1DUvuGbghPLjRpHgCWbfVIbP6ixe+jkRU8
Apr 17 03:04:12.430414 sshd-session[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 03:04:12.435002 systemd-logind[1551]: New session 18 of user core.
Apr 17 03:04:12.441231 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 17 03:04:12.506614 sshd[5760]: Connection closed by 10.0.0.1 port 40452
Apr 17 03:04:12.507029 sshd-session[5757]: pam_unix(sshd:session): session closed for user core
Apr 17 03:04:12.511069 systemd[1]: sshd@17-10.0.0.7:22-10.0.0.1:40452.service: Deactivated successfully.
Apr 17 03:04:12.512743 systemd[1]: session-18.scope: Deactivated successfully.
Apr 17 03:04:12.514007 systemd-logind[1551]: Session 18 logged out. Waiting for processes to exit.
Apr 17 03:04:12.515847 systemd-logind[1551]: Removed session 18.
Apr 17 03:04:17.520472 systemd[1]: Started sshd@18-10.0.0.7:22-10.0.0.1:40456.service - OpenSSH per-connection server daemon (10.0.0.1:40456).
Apr 17 03:04:17.571201 sshd[5781]: Accepted publickey for core from 10.0.0.1 port 40456 ssh2: RSA SHA256:FVrkeUr4F1DUvuGbghPLjRpHgCWbfVIbP6ixe+jkRU8
Apr 17 03:04:17.572261 sshd-session[5781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 03:04:17.577324 systemd-logind[1551]: New session 19 of user core.
Apr 17 03:04:17.585513 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 17 03:04:17.663446 sshd[5784]: Connection closed by 10.0.0.1 port 40456
Apr 17 03:04:17.663816 sshd-session[5781]: pam_unix(sshd:session): session closed for user core
Apr 17 03:04:17.667390 systemd[1]: sshd@18-10.0.0.7:22-10.0.0.1:40456.service: Deactivated successfully.
Apr 17 03:04:17.668999 systemd[1]: session-19.scope: Deactivated successfully.
Apr 17 03:04:17.669610 systemd-logind[1551]: Session 19 logged out. Waiting for processes to exit.
Apr 17 03:04:17.671187 systemd-logind[1551]: Removed session 19.
Apr 17 03:04:22.675603 systemd[1]: Started sshd@19-10.0.0.7:22-10.0.0.1:35854.service - OpenSSH per-connection server daemon (10.0.0.1:35854).
Apr 17 03:04:22.728738 sshd[5815]: Accepted publickey for core from 10.0.0.1 port 35854 ssh2: RSA SHA256:FVrkeUr4F1DUvuGbghPLjRpHgCWbfVIbP6ixe+jkRU8
Apr 17 03:04:22.729776 sshd-session[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 03:04:22.734021 systemd-logind[1551]: New session 20 of user core.
Apr 17 03:04:22.740197 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 17 03:04:22.811165 sshd[5818]: Connection closed by 10.0.0.1 port 35854
Apr 17 03:04:22.811514 sshd-session[5815]: pam_unix(sshd:session): session closed for user core
Apr 17 03:04:22.814582 systemd[1]: sshd@19-10.0.0.7:22-10.0.0.1:35854.service: Deactivated successfully.
Apr 17 03:04:22.816049 systemd[1]: session-20.scope: Deactivated successfully.
Apr 17 03:04:22.816713 systemd-logind[1551]: Session 20 logged out. Waiting for processes to exit.
Apr 17 03:04:22.817962 systemd-logind[1551]: Removed session 20.