Apr 16 02:32:53.828352 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Apr 15 22:39:17 -00 2026
Apr 16 02:32:53.828379 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=15b40c09f238fba45b5bb3e18ef7e289d4e557e0500075f5731dd7eaa53962ae
Apr 16 02:32:53.828392 kernel: BIOS-provided physical RAM map:
Apr 16 02:32:53.828399 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 16 02:32:53.828404 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 16 02:32:53.828410 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 16 02:32:53.828416 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 16 02:32:53.828422 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 16 02:32:53.828428 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 16 02:32:53.828454 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 16 02:32:53.828463 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 16 02:32:53.828470 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 16 02:32:53.828475 kernel: NX (Execute Disable) protection: active
Apr 16 02:32:53.828481 kernel: APIC: Static calls initialized
Apr 16 02:32:53.828488 kernel: SMBIOS 2.8 present.
Apr 16 02:32:53.828493 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 16 02:32:53.828501 kernel: DMI: Memory slots populated: 1/1
Apr 16 02:32:53.828508 kernel: Hypervisor detected: KVM
Apr 16 02:32:53.828513 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 16 02:32:53.828518 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 16 02:32:53.828522 kernel: kvm-clock: using sched offset of 6932055696 cycles
Apr 16 02:32:53.828528 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 16 02:32:53.828533 kernel: tsc: Detected 2793.438 MHz processor
Apr 16 02:32:53.828538 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 16 02:32:53.828544 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 16 02:32:53.828549 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 16 02:32:53.828555 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 16 02:32:53.828560 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 16 02:32:53.828565 kernel: Using GB pages for direct mapping
Apr 16 02:32:53.828570 kernel: ACPI: Early table checksum verification disabled
Apr 16 02:32:53.828575 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 16 02:32:53.828580 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 02:32:53.828585 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 02:32:53.828590 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 02:32:53.828595 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 16 02:32:53.828601 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 02:32:53.828606 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 02:32:53.828614 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 02:32:53.828622 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 02:32:53.828630 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 16 02:32:53.828640 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 16 02:32:53.828646 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 16 02:32:53.828651 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 16 02:32:53.828656 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 16 02:32:53.828662 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 16 02:32:53.828667 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 16 02:32:53.828672 kernel: No NUMA configuration found
Apr 16 02:32:53.828677 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 16 02:32:53.828682 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Apr 16 02:32:53.828689 kernel: Zone ranges:
Apr 16 02:32:53.828694 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 16 02:32:53.828699 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 16 02:32:53.828705 kernel: Normal empty
Apr 16 02:32:53.828710 kernel: Device empty
Apr 16 02:32:53.828715 kernel: Movable zone start for each node
Apr 16 02:32:53.828720 kernel: Early memory node ranges
Apr 16 02:32:53.828726 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 16 02:32:53.828731 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 16 02:32:53.828738 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 16 02:32:53.828743 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 16 02:32:53.828748 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 16 02:32:53.828753 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 16 02:32:53.828758 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 16 02:32:53.828764 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 16 02:32:53.828769 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 16 02:32:53.828774 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 16 02:32:53.828779 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 16 02:32:53.828786 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 16 02:32:53.828791 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 16 02:32:53.828796 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 16 02:32:53.828801 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 16 02:32:53.828806 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 16 02:32:53.828812 kernel: TSC deadline timer available
Apr 16 02:32:53.828817 kernel: CPU topo: Max. logical packages: 1
Apr 16 02:32:53.828822 kernel: CPU topo: Max. logical dies: 1
Apr 16 02:32:53.828827 kernel: CPU topo: Max. dies per package: 1
Apr 16 02:32:53.828832 kernel: CPU topo: Max. threads per core: 1
Apr 16 02:32:53.828838 kernel: CPU topo: Num. cores per package: 4
Apr 16 02:32:53.828843 kernel: CPU topo: Num. threads per package: 4
Apr 16 02:32:53.828849 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 16 02:32:53.828854 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 16 02:32:53.828859 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 16 02:32:53.828864 kernel: kvm-guest: setup PV sched yield
Apr 16 02:32:53.828870 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 16 02:32:53.828875 kernel: Booting paravirtualized kernel on KVM
Apr 16 02:32:53.828880 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 16 02:32:53.828887 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 16 02:32:53.828892 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u524288
Apr 16 02:32:53.828897 kernel: pcpu-alloc: s207448 r8192 d30120 u524288 alloc=1*2097152
Apr 16 02:32:53.828902 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 16 02:32:53.828907 kernel: kvm-guest: PV spinlocks enabled
Apr 16 02:32:53.828912 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 16 02:32:53.828918 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=15b40c09f238fba45b5bb3e18ef7e289d4e557e0500075f5731dd7eaa53962ae
Apr 16 02:32:53.828924 kernel: random: crng init done
Apr 16 02:32:53.828930 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 02:32:53.828936 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 02:32:53.828941 kernel: Fallback order for Node 0: 0
Apr 16 02:32:53.828946 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Apr 16 02:32:53.828951 kernel: Policy zone: DMA32
Apr 16 02:32:53.828956 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 02:32:53.828962 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 16 02:32:53.828967 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 16 02:32:53.828972 kernel: ftrace: allocated 157 pages with 5 groups
Apr 16 02:32:53.828979 kernel: Dynamic Preempt: voluntary
Apr 16 02:32:53.828984 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 02:32:53.829097 kernel: rcu: RCU event tracing is enabled.
Apr 16 02:32:53.829116 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 16 02:32:53.829122 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 02:32:53.829127 kernel: Rude variant of Tasks RCU enabled.
Apr 16 02:32:53.829132 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 02:32:53.829137 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 02:32:53.829143 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 16 02:32:53.829152 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 02:32:53.829159 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 02:32:53.829168 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 02:32:53.829189 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 16 02:32:53.829195 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 02:32:53.829200 kernel: Console: colour VGA+ 80x25
Apr 16 02:32:53.829240 kernel: printk: legacy console [ttyS0] enabled
Apr 16 02:32:53.829249 kernel: ACPI: Core revision 20240827
Apr 16 02:32:53.829255 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 16 02:32:53.829261 kernel: APIC: Switch to symmetric I/O mode setup
Apr 16 02:32:53.829267 kernel: x2apic enabled
Apr 16 02:32:53.829272 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 16 02:32:53.829279 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 16 02:32:53.829285 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 16 02:32:53.829291 kernel: kvm-guest: setup PV IPIs
Apr 16 02:32:53.829297 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 16 02:32:53.829302 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 02:32:53.829310 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 16 02:32:53.829316 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 16 02:32:53.829321 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 16 02:32:53.829329 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 16 02:32:53.829339 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 16 02:32:53.829347 kernel: Spectre V2 : Mitigation: Retpolines
Apr 16 02:32:53.829356 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 16 02:32:53.829366 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 16 02:32:53.829379 kernel: RETBleed: Vulnerable
Apr 16 02:32:53.829385 kernel: Speculative Store Bypass: Vulnerable
Apr 16 02:32:53.829391 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 16 02:32:53.829396 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 16 02:32:53.829402 kernel: active return thunk: its_return_thunk
Apr 16 02:32:53.829408 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 16 02:32:53.829413 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 16 02:32:53.829419 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 16 02:32:53.829425 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 16 02:32:53.829450 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 16 02:32:53.829459 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 16 02:32:53.829469 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 16 02:32:53.829478 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 16 02:32:53.829484 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 16 02:32:53.829489 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 16 02:32:53.829495 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 16 02:32:53.829501 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 16 02:32:53.829506 kernel: Freeing SMP alternatives memory: 32K
Apr 16 02:32:53.829513 kernel: pid_max: default: 32768 minimum: 301
Apr 16 02:32:53.829519 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 16 02:32:53.829525 kernel: landlock: Up and running.
Apr 16 02:32:53.829530 kernel: SELinux: Initializing.
Apr 16 02:32:53.829536 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 02:32:53.829541 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 02:32:53.829547 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 16 02:32:53.829553 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 16 02:32:53.829558 kernel: signal: max sigframe size: 3632
Apr 16 02:32:53.829566 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 02:32:53.829571 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 02:32:53.829577 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 16 02:32:53.829583 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 16 02:32:53.829589 kernel: smp: Bringing up secondary CPUs ...
Apr 16 02:32:53.829594 kernel: smpboot: x86: Booting SMP configuration:
Apr 16 02:32:53.829600 kernel: .... node #0, CPUs: #1 #2 #3
Apr 16 02:32:53.829606 kernel: smp: Brought up 1 node, 4 CPUs
Apr 16 02:32:53.829611 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 16 02:32:53.829619 kernel: Memory: 2419756K/2571752K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46224K init, 2524K bss, 146108K reserved, 0K cma-reserved)
Apr 16 02:32:53.829624 kernel: devtmpfs: initialized
Apr 16 02:32:53.829630 kernel: x86/mm: Memory block size: 128MB
Apr 16 02:32:53.829636 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 02:32:53.829642 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 16 02:32:53.829647 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 02:32:53.829653 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 02:32:53.829659 kernel: audit: initializing netlink subsys (disabled)
Apr 16 02:32:53.829664 kernel: audit: type=2000 audit(1776306769.138:1): state=initialized audit_enabled=0 res=1
Apr 16 02:32:53.829671 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 02:32:53.829677 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 16 02:32:53.829683 kernel: cpuidle: using governor menu
Apr 16 02:32:53.829689 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 02:32:53.829694 kernel: dca service started, version 1.12.1
Apr 16 02:32:53.829700 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Apr 16 02:32:53.829705 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 16 02:32:53.829711 kernel: PCI: Using configuration type 1 for base access
Apr 16 02:32:53.829717 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 16 02:32:53.829724 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 02:32:53.829730 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 02:32:53.829736 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 02:32:53.829741 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 02:32:53.829747 kernel: ACPI: Added _OSI(Module Device)
Apr 16 02:32:53.829752 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 02:32:53.829758 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 02:32:53.829764 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 02:32:53.829769 kernel: ACPI: Interpreter enabled
Apr 16 02:32:53.829776 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 16 02:32:53.829782 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 16 02:32:53.829788 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 16 02:32:53.829793 kernel: PCI: Using E820 reservations for host bridge windows
Apr 16 02:32:53.829799 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 16 02:32:53.829805 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 16 02:32:53.829930 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 02:32:53.829989 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 16 02:32:53.830045 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 16 02:32:53.830052 kernel: PCI host bridge to bus 0000:00
Apr 16 02:32:53.830109 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 16 02:32:53.830157 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 16 02:32:53.830204 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 16 02:32:53.830370 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 16 02:32:53.830453 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 16 02:32:53.830511 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 16 02:32:53.830559 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 16 02:32:53.830630 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 16 02:32:53.830692 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 16 02:32:53.830746 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Apr 16 02:32:53.830798 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Apr 16 02:32:53.830852 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Apr 16 02:32:53.830904 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 16 02:32:53.830963 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 16 02:32:53.831017 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Apr 16 02:32:53.831070 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Apr 16 02:32:53.831142 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 16 02:32:53.831307 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 16 02:32:53.831390 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Apr 16 02:32:53.831544 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Apr 16 02:32:53.831619 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 16 02:32:53.831713 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 16 02:32:53.831788 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Apr 16 02:32:53.831873 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Apr 16 02:32:53.831951 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 16 02:32:53.832021 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Apr 16 02:32:53.832101 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 16 02:32:53.832175 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 16 02:32:53.834418 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 16 02:32:53.834600 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Apr 16 02:32:53.834661 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Apr 16 02:32:53.834755 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 16 02:32:53.834811 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Apr 16 02:32:53.834818 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 16 02:32:53.834825 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 16 02:32:53.834830 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 16 02:32:53.834836 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 16 02:32:53.834842 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 16 02:32:53.834848 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 16 02:32:53.834856 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 16 02:32:53.834862 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 16 02:32:53.834868 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 16 02:32:53.834873 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 16 02:32:53.834879 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 16 02:32:53.834885 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 16 02:32:53.834891 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 16 02:32:53.834897 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 16 02:32:53.834902 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 16 02:32:53.834909 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 16 02:32:53.834915 kernel: iommu: Default domain type: Translated
Apr 16 02:32:53.834921 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 16 02:32:53.834926 kernel: PCI: Using ACPI for IRQ routing
Apr 16 02:32:53.834932 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 16 02:32:53.834938 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 16 02:32:53.834944 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 16 02:32:53.834998 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 16 02:32:53.835050 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 16 02:32:53.835104 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 16 02:32:53.835111 kernel: vgaarb: loaded
Apr 16 02:32:53.835117 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 16 02:32:53.835123 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 16 02:32:53.835129 kernel: clocksource: Switched to clocksource kvm-clock
Apr 16 02:32:53.835135 kernel: VFS: Disk quotas dquot_6.6.0
Apr 16 02:32:53.835141 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 16 02:32:53.835146 kernel: pnp: PnP ACPI init
Apr 16 02:32:53.835205 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 16 02:32:53.835316 kernel: pnp: PnP ACPI: found 6 devices
Apr 16 02:32:53.835323 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 16 02:32:53.835328 kernel: NET: Registered PF_INET protocol family
Apr 16 02:32:53.835334 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 16 02:32:53.835340 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 16 02:32:53.835346 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 16 02:32:53.835352 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 02:32:53.835358 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 16 02:32:53.835367 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 16 02:32:53.835373 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 02:32:53.835378 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 02:32:53.835384 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 16 02:32:53.835390 kernel: NET: Registered PF_XDP protocol family
Apr 16 02:32:53.835506 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 16 02:32:53.835557 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 16 02:32:53.835611 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 16 02:32:53.835660 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 16 02:32:53.835707 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 16 02:32:53.835755 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 16 02:32:53.835762 kernel: PCI: CLS 0 bytes, default 64
Apr 16 02:32:53.835768 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 16 02:32:53.835774 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 02:32:53.835780 kernel: Initialise system trusted keyrings
Apr 16 02:32:53.835786 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 16 02:32:53.835792 kernel: hrtimer: interrupt took 2520523 ns
Apr 16 02:32:53.835800 kernel: Key type asymmetric registered
Apr 16 02:32:53.835806 kernel: Asymmetric key parser 'x509' registered
Apr 16 02:32:53.835812 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 16 02:32:53.835817 kernel: io scheduler mq-deadline registered
Apr 16 02:32:53.835823 kernel: io scheduler kyber registered
Apr 16 02:32:53.835829 kernel: io scheduler bfq registered
Apr 16 02:32:53.835835 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 16 02:32:53.835841 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 16 02:32:53.835847 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 16 02:32:53.835855 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 16 02:32:53.835860 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 16 02:32:53.835866 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 16 02:32:53.835872 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 16 02:32:53.835878 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 16 02:32:53.835884 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 16 02:32:53.835941 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 16 02:32:53.835950 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 16 02:32:53.835999 kernel: rtc_cmos 00:04: registered as rtc0
Apr 16 02:32:53.836047 kernel: rtc_cmos 00:04: setting system clock to 2026-04-16T02:32:53 UTC (1776306773)
Apr 16 02:32:53.836096 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 16 02:32:53.836103 kernel: intel_pstate: CPU model not supported
Apr 16 02:32:53.836108 kernel: NET: Registered PF_INET6 protocol family
Apr 16 02:32:53.836114 kernel: Segment Routing with IPv6
Apr 16 02:32:53.836120 kernel: In-situ OAM (IOAM) with IPv6
Apr 16 02:32:53.836126 kernel: NET: Registered PF_PACKET protocol family
Apr 16 02:32:53.836131 kernel: Key type dns_resolver registered
Apr 16 02:32:53.836138 kernel: IPI shorthand broadcast: enabled
Apr 16 02:32:53.836144 kernel: sched_clock: Marking stable (3751009472, 462771133)->(4401450557, -187669952)
Apr 16 02:32:53.836150 kernel: registered taskstats version 1
Apr 16 02:32:53.836156 kernel: Loading compiled-in X.509 certificates
Apr 16 02:32:53.836162 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 25c2b596b475a2918f2ba6f953b0a89c09a0d0ab'
Apr 16 02:32:53.836167 kernel: Demotion targets for Node 0: null
Apr 16 02:32:53.836173 kernel: Key type .fscrypt registered
Apr 16 02:32:53.836178 kernel: Key type fscrypt-provisioning registered
Apr 16 02:32:53.836184 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 16 02:32:53.836191 kernel: ima: Allocated hash algorithm: sha1
Apr 16 02:32:53.836197 kernel: ima: No architecture policies found
Apr 16 02:32:53.836202 kernel: clk: Disabling unused clocks
Apr 16 02:32:53.836208 kernel: Warning: unable to open an initial console.
Apr 16 02:32:53.836240 kernel: Freeing unused kernel image (initmem) memory: 46224K
Apr 16 02:32:53.836246 kernel: Write protecting the kernel read-only data: 40960k
Apr 16 02:32:53.836252 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K
Apr 16 02:32:53.836257 kernel: Run /init as init process
Apr 16 02:32:53.836263 kernel: with arguments:
Apr 16 02:32:53.836271 kernel: /init
Apr 16 02:32:53.836276 kernel: with environment:
Apr 16 02:32:53.836282 kernel: HOME=/
Apr 16 02:32:53.836288 kernel: TERM=linux
Apr 16 02:32:53.836294 systemd[1]: Successfully made /usr/ read-only.
Apr 16 02:32:53.836303 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 02:32:53.836311 systemd[1]: Detected virtualization kvm.
Apr 16 02:32:53.836324 systemd[1]: Detected architecture x86-64.
Apr 16 02:32:53.836331 systemd[1]: Running in initrd.
Apr 16 02:32:53.836337 systemd[1]: No hostname configured, using default hostname.
Apr 16 02:32:53.836343 systemd[1]: Hostname set to .
Apr 16 02:32:53.836349 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 02:32:53.836356 systemd[1]: Queued start job for default target initrd.target.
Apr 16 02:32:53.836362 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 02:32:53.836369 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 02:32:53.836376 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 16 02:32:53.836382 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 02:32:53.836389 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 16 02:32:53.836395 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 16 02:32:53.836403 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 16 02:32:53.836410 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 16 02:32:53.836417 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 02:32:53.836423 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 02:32:53.836429 systemd[1]: Reached target paths.target - Path Units.
Apr 16 02:32:53.836454 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 02:32:53.836465 systemd[1]: Reached target swap.target - Swaps.
Apr 16 02:32:53.836475 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 02:32:53.836482 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 02:32:53.836488 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 02:32:53.836496 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 02:32:53.836503 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 16 02:32:53.836510 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 16 02:32:53.836516 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 02:32:53.836522 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 02:32:53.836529 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 02:32:53.836535 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 16 02:32:53.836542 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 02:32:53.836549 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 16 02:32:53.836555 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 16 02:32:53.836562 systemd[1]: Starting systemd-fsck-usr.service... Apr 16 02:32:53.836568 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 16 02:32:53.836574 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 16 02:32:53.836580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 02:32:53.836588 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 16 02:32:53.836594 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 02:32:53.836601 systemd[1]: Finished systemd-fsck-usr.service. Apr 16 02:32:53.836624 systemd-journald[202]: Collecting audit messages is disabled. Apr 16 02:32:53.836643 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Apr 16 02:32:53.836651 systemd-journald[202]: Journal started Apr 16 02:32:53.836668 systemd-journald[202]: Runtime Journal (/run/log/journal/d1079c1c1a20437bb96faac16334edfe) is 6M, max 48.2M, 42.2M free. Apr 16 02:32:53.828073 systemd-modules-load[203]: Inserted module 'overlay' Apr 16 02:32:53.929475 systemd[1]: Started systemd-journald.service - Journal Service. Apr 16 02:32:53.929551 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 16 02:32:53.929571 kernel: Bridge firewalling registered Apr 16 02:32:53.858334 systemd-modules-load[203]: Inserted module 'br_netfilter' Apr 16 02:32:53.929899 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 16 02:32:53.935864 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 02:32:53.939370 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 02:32:53.944096 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 16 02:32:53.947088 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 02:32:53.952265 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 16 02:32:53.952891 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 02:32:53.967645 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 16 02:32:53.970757 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 02:32:53.985181 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 02:32:54.025755 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 16 02:32:54.028146 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 02:32:54.044658 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 02:32:54.048615 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 16 02:32:54.062640 systemd-resolved[235]: Positive Trust Anchors: Apr 16 02:32:54.062664 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 02:32:54.062688 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 02:32:54.065342 systemd-resolved[235]: Defaulting to hostname 'linux'. Apr 16 02:32:54.066187 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 02:32:54.067033 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 02:32:54.103498 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=15b40c09f238fba45b5bb3e18ef7e289d4e557e0500075f5731dd7eaa53962ae Apr 16 02:32:54.205277 kernel: SCSI subsystem initialized Apr 16 02:32:54.213359 kernel: Loading iSCSI transport class v2.0-870. 
Apr 16 02:32:54.224273 kernel: iscsi: registered transport (tcp) Apr 16 02:32:54.243678 kernel: iscsi: registered transport (qla4xxx) Apr 16 02:32:54.243859 kernel: QLogic iSCSI HBA Driver Apr 16 02:32:54.263860 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 16 02:32:54.288775 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 16 02:32:54.290430 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 16 02:32:54.348112 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 16 02:32:54.351526 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 16 02:32:54.412490 kernel: raid6: avx512x4 gen() 32469 MB/s Apr 16 02:32:54.430370 kernel: raid6: avx512x2 gen() 31747 MB/s Apr 16 02:32:54.447309 kernel: raid6: avx512x1 gen() 30811 MB/s Apr 16 02:32:54.465341 kernel: raid6: avx2x4 gen() 34022 MB/s Apr 16 02:32:54.531389 kernel: raid6: avx2x2 gen() 7519 MB/s Apr 16 02:32:54.549423 kernel: raid6: avx2x1 gen() 24505 MB/s Apr 16 02:32:54.549571 kernel: raid6: using algorithm avx2x4 gen() 34022 MB/s Apr 16 02:32:54.567972 kernel: raid6: .... xor() 9898 MB/s, rmw enabled Apr 16 02:32:54.568074 kernel: raid6: using avx512x2 recovery algorithm Apr 16 02:32:54.588380 kernel: xor: automatically using best checksumming function avx Apr 16 02:32:54.783327 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 16 02:32:54.792073 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 16 02:32:54.793698 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 02:32:54.823379 systemd-udevd[453]: Using default interface naming scheme 'v255'. Apr 16 02:32:54.827295 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 02:32:54.831088 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 16 02:32:54.870468 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Apr 16 02:32:54.898635 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 16 02:32:54.903719 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 16 02:32:54.941892 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 02:32:54.947010 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 16 02:32:54.983270 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 16 02:32:54.993248 kernel: cryptd: max_cpu_qlen set to 1000 Apr 16 02:32:54.997260 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 16 02:32:55.000728 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 16 02:32:55.000764 kernel: GPT:9289727 != 19775487 Apr 16 02:32:55.000773 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 16 02:32:55.001894 kernel: GPT:9289727 != 19775487 Apr 16 02:32:55.002503 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 16 02:32:55.004865 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 02:32:55.004890 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 16 02:32:55.013245 kernel: libata version 3.00 loaded. Apr 16 02:32:55.016568 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 02:32:55.018756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 02:32:55.023241 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 02:32:55.028739 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 16 02:32:55.034398 kernel: ahci 0000:00:1f.2: version 3.0 Apr 16 02:32:55.034566 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 16 02:32:55.034577 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 16 02:32:55.036630 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 16 02:32:55.042620 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 16 02:32:55.042774 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 16 02:32:55.045561 kernel: scsi host0: ahci Apr 16 02:32:55.048251 kernel: AES CTR mode by8 optimization enabled Apr 16 02:32:55.052392 kernel: scsi host1: ahci Apr 16 02:32:55.055701 kernel: scsi host2: ahci Apr 16 02:32:55.055845 kernel: scsi host3: ahci Apr 16 02:32:55.057738 kernel: scsi host4: ahci Apr 16 02:32:55.061273 kernel: scsi host5: ahci Apr 16 02:32:55.061469 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Apr 16 02:32:55.061486 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Apr 16 02:32:55.066837 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Apr 16 02:32:55.066872 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Apr 16 02:32:55.066889 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Apr 16 02:32:55.068755 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Apr 16 02:32:55.071558 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 16 02:32:55.087070 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 16 02:32:55.172868 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 16 02:32:55.175940 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 16 02:32:55.189977 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 16 02:32:55.192646 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 16 02:32:55.202841 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 16 02:32:55.229148 disk-uuid[640]: Primary Header is updated. Apr 16 02:32:55.229148 disk-uuid[640]: Secondary Entries is updated. Apr 16 02:32:55.229148 disk-uuid[640]: Secondary Header is updated. Apr 16 02:32:55.237319 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 02:32:55.378428 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 16 02:32:55.378537 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 16 02:32:55.380294 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 16 02:32:55.383276 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 16 02:32:55.386042 kernel: ata3.00: LPM support broken, forcing max_power Apr 16 02:32:55.386091 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 16 02:32:55.386123 kernel: ata3.00: applying bridge limits Apr 16 02:32:55.388287 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 16 02:32:55.391285 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 16 02:32:55.391328 kernel: ata3.00: LPM support broken, forcing max_power Apr 16 02:32:55.392771 kernel: ata3.00: configured for UDMA/100 Apr 16 02:32:55.395302 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 16 02:32:55.453820 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 16 02:32:55.454164 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 16 02:32:55.467442 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 16 02:32:55.802973 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Apr 16 02:32:55.805181 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 16 02:32:55.809909 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 02:32:55.812281 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 02:32:55.817626 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 16 02:32:55.849926 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 16 02:32:56.246081 disk-uuid[641]: The operation has completed successfully. Apr 16 02:32:56.247770 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 02:32:56.272259 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 16 02:32:56.272516 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 16 02:32:56.316972 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 16 02:32:56.330798 sh[681]: Success Apr 16 02:32:56.352269 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 16 02:32:56.352341 kernel: device-mapper: uevent: version 1.0.3 Apr 16 02:32:56.354811 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 16 02:32:56.365253 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 16 02:32:56.398030 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 16 02:32:56.401914 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 16 02:32:56.415791 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 16 02:32:56.425124 kernel: BTRFS: device fsid 20ab7e7c-5d1e-4cd5-bec1-5b111d7138f2 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (693) Apr 16 02:32:56.425149 kernel: BTRFS info (device dm-0): first mount of filesystem 20ab7e7c-5d1e-4cd5-bec1-5b111d7138f2 Apr 16 02:32:56.425161 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 16 02:32:56.430937 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 16 02:32:56.430982 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 16 02:32:56.432619 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 16 02:32:56.433129 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 16 02:32:56.433572 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 16 02:32:56.434344 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 16 02:32:56.435027 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 16 02:32:56.466306 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (724) Apr 16 02:32:56.470409 kernel: BTRFS info (device vda6): first mount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 02:32:56.470480 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 02:32:56.475898 kernel: BTRFS info (device vda6): turning on async discard Apr 16 02:32:56.475960 kernel: BTRFS info (device vda6): enabling free space tree Apr 16 02:32:56.482245 kernel: BTRFS info (device vda6): last unmount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 02:32:56.482698 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 16 02:32:56.485945 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 16 02:32:56.578680 ignition[777]: Ignition 2.22.0 Apr 16 02:32:56.578705 ignition[777]: Stage: fetch-offline Apr 16 02:32:56.578732 ignition[777]: no configs at "/usr/lib/ignition/base.d" Apr 16 02:32:56.578740 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 02:32:56.578825 ignition[777]: parsed url from cmdline: "" Apr 16 02:32:56.578828 ignition[777]: no config URL provided Apr 16 02:32:56.578833 ignition[777]: reading system config file "/usr/lib/ignition/user.ign" Apr 16 02:32:56.587580 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 16 02:32:56.578840 ignition[777]: no config at "/usr/lib/ignition/user.ign" Apr 16 02:32:56.591649 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 02:32:56.578862 ignition[777]: op(1): [started] loading QEMU firmware config module Apr 16 02:32:56.578866 ignition[777]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 16 02:32:56.603910 ignition[777]: op(1): [finished] loading QEMU firmware config module Apr 16 02:32:56.637262 systemd-networkd[870]: lo: Link UP Apr 16 02:32:56.637281 systemd-networkd[870]: lo: Gained carrier Apr 16 02:32:56.638801 systemd-networkd[870]: Enumeration completed Apr 16 02:32:56.639187 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 16 02:32:56.640547 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 02:32:56.640550 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 02:32:56.642366 systemd[1]: Reached target network.target - Network. 
Apr 16 02:32:56.643282 systemd-networkd[870]: eth0: Link UP Apr 16 02:32:56.643957 systemd-networkd[870]: eth0: Gained carrier Apr 16 02:32:56.643971 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 02:32:56.721340 systemd-networkd[870]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 16 02:32:56.798953 ignition[777]: parsing config with SHA512: 0a7ca630cda102347863f498cdc96940134b48dcaee8b474617742cf1faed42ea113fbfbc7b8828ab898882a4b6a1dac3f0457dc24d85c284081d145837881f3 Apr 16 02:32:56.806186 unknown[777]: fetched base config from "system" Apr 16 02:32:56.806257 unknown[777]: fetched user config from "qemu" Apr 16 02:32:56.810358 ignition[777]: fetch-offline: fetch-offline passed Apr 16 02:32:56.810447 ignition[777]: Ignition finished successfully Apr 16 02:32:56.815266 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 16 02:32:56.819419 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 16 02:32:56.820276 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 16 02:32:56.863739 ignition[876]: Ignition 2.22.0 Apr 16 02:32:56.863766 ignition[876]: Stage: kargs Apr 16 02:32:56.863905 ignition[876]: no configs at "/usr/lib/ignition/base.d" Apr 16 02:32:56.863912 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 02:32:56.864711 ignition[876]: kargs: kargs passed Apr 16 02:32:56.864785 ignition[876]: Ignition finished successfully Apr 16 02:32:56.872502 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 16 02:32:56.878891 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 16 02:32:56.920806 ignition[884]: Ignition 2.22.0 Apr 16 02:32:56.920959 ignition[884]: Stage: disks Apr 16 02:32:56.921183 ignition[884]: no configs at "/usr/lib/ignition/base.d" Apr 16 02:32:56.921193 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 02:32:56.922353 ignition[884]: disks: disks passed Apr 16 02:32:56.926791 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 16 02:32:56.922415 ignition[884]: Ignition finished successfully Apr 16 02:32:56.932099 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 16 02:32:56.937710 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 16 02:32:56.937868 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 16 02:32:56.944839 systemd[1]: Reached target sysinit.target - System Initialization. Apr 16 02:32:56.947615 systemd[1]: Reached target basic.target - Basic System. Apr 16 02:32:56.952324 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 16 02:32:56.985811 systemd-fsck[895]: ROOT: clean, 15/553520 files, 52789/553472 blocks Apr 16 02:32:56.990985 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 16 02:32:56.992120 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 16 02:32:57.104248 kernel: EXT4-fs (vda9): mounted filesystem 75cd5b5e-229f-474b-8de5-870bc4bccaf1 r/w with ordered data mode. Quota mode: none. Apr 16 02:32:57.104839 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 16 02:32:57.107815 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 16 02:32:57.112572 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 16 02:32:57.116133 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 16 02:32:57.118658 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Apr 16 02:32:57.118755 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 16 02:32:57.143255 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (903) Apr 16 02:32:57.143286 kernel: BTRFS info (device vda6): first mount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 02:32:57.143298 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 02:32:57.118792 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 16 02:32:57.125382 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 16 02:32:57.129538 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 16 02:32:57.158393 kernel: BTRFS info (device vda6): turning on async discard Apr 16 02:32:57.158435 kernel: BTRFS info (device vda6): enabling free space tree Apr 16 02:32:57.159322 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 16 02:32:57.183942 initrd-setup-root[927]: cut: /sysroot/etc/passwd: No such file or directory Apr 16 02:32:57.193599 initrd-setup-root[934]: cut: /sysroot/etc/group: No such file or directory Apr 16 02:32:57.203400 initrd-setup-root[941]: cut: /sysroot/etc/shadow: No such file or directory Apr 16 02:32:57.210749 initrd-setup-root[948]: cut: /sysroot/etc/gshadow: No such file or directory Apr 16 02:32:57.382627 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 16 02:32:57.386263 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 16 02:32:57.388876 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 16 02:32:57.415301 kernel: BTRFS info (device vda6): last unmount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 02:32:57.426922 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 16 02:32:57.435887 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 16 02:32:57.457120 ignition[1017]: INFO : Ignition 2.22.0 Apr 16 02:32:57.457120 ignition[1017]: INFO : Stage: mount Apr 16 02:32:57.461146 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 02:32:57.461146 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 02:32:57.461146 ignition[1017]: INFO : mount: mount passed Apr 16 02:32:57.461146 ignition[1017]: INFO : Ignition finished successfully Apr 16 02:32:57.469139 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 16 02:32:57.473145 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 16 02:32:57.494513 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 16 02:32:57.521257 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1029) Apr 16 02:32:57.521305 kernel: BTRFS info (device vda6): first mount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 02:32:57.524207 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 02:32:57.530373 kernel: BTRFS info (device vda6): turning on async discard Apr 16 02:32:57.530479 kernel: BTRFS info (device vda6): enabling free space tree Apr 16 02:32:57.533876 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 16 02:32:57.568186 ignition[1046]: INFO : Ignition 2.22.0 Apr 16 02:32:57.568186 ignition[1046]: INFO : Stage: files Apr 16 02:32:57.570634 ignition[1046]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 02:32:57.570634 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 02:32:57.574547 ignition[1046]: DEBUG : files: compiled without relabeling support, skipping Apr 16 02:32:57.577424 ignition[1046]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 16 02:32:57.577424 ignition[1046]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 16 02:32:57.584415 ignition[1046]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 16 02:32:57.588027 ignition[1046]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 16 02:32:57.588027 ignition[1046]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 16 02:32:57.586595 unknown[1046]: wrote ssh authorized keys file for user: core Apr 16 02:32:57.596679 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 16 02:32:57.596679 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 16 02:32:57.659726 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 16 02:32:57.777086 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 16 02:32:57.777086 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 16 02:32:57.817263 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 16 02:32:57.817263 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 16 02:32:57.817263 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 16 02:32:57.817263 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 02:32:57.817263 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 02:32:57.817263 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 02:32:57.817263 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 02:32:57.817263 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 16 02:32:57.817263 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 16 02:32:57.817263 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 16 02:32:57.817263 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 16 02:32:57.817263 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 16 02:32:57.817263 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Apr 16 02:32:58.094054 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 16 02:32:58.389142 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 16 02:32:58.389142 ignition[1046]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 16 02:32:58.396831 ignition[1046]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 16 02:32:58.401354 ignition[1046]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 16 02:32:58.401354 ignition[1046]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 16 02:32:58.401354 ignition[1046]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 16 02:32:58.409350 ignition[1046]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 16 02:32:58.409350 ignition[1046]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 16 02:32:58.409350 ignition[1046]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 16 02:32:58.409350 ignition[1046]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 16 02:32:58.437121 ignition[1046]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 16 02:32:58.447001 ignition[1046]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 16 02:32:58.450870 ignition[1046]: INFO : files: op(f): [finished] setting 
preset to disabled for "coreos-metadata.service" Apr 16 02:32:58.450870 ignition[1046]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 16 02:32:58.450870 ignition[1046]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 16 02:32:58.450870 ignition[1046]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 16 02:32:58.450870 ignition[1046]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 16 02:32:58.450870 ignition[1046]: INFO : files: files passed Apr 16 02:32:58.450870 ignition[1046]: INFO : Ignition finished successfully Apr 16 02:32:58.450073 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 16 02:32:58.452485 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 16 02:32:58.512854 systemd-networkd[870]: eth0: Gained IPv6LL Apr 16 02:32:58.514400 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 16 02:32:58.534297 initrd-setup-root-after-ignition[1074]: grep: /sysroot/oem/oem-release: No such file or directory Apr 16 02:32:58.519850 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 16 02:32:58.539065 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 02:32:58.539065 initrd-setup-root-after-ignition[1077]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 16 02:32:58.520072 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 16 02:32:58.553985 initrd-setup-root-after-ignition[1081]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 02:32:58.537959 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Apr 16 02:32:58.542260 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 16 02:32:58.545073 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 16 02:32:58.621329 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 16 02:32:58.621567 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 16 02:32:58.626030 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 16 02:32:58.630785 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 16 02:32:58.633204 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 16 02:32:58.637100 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 16 02:32:58.669149 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 02:32:58.674256 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 16 02:32:58.703645 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 16 02:32:58.705589 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 02:32:58.708976 systemd[1]: Stopped target timers.target - Timer Units.
Apr 16 02:32:58.712033 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 16 02:32:58.712167 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 02:32:58.718382 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 16 02:32:58.720552 systemd[1]: Stopped target basic.target - Basic System.
Apr 16 02:32:58.723583 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 16 02:32:58.726483 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 02:32:58.728019 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 16 02:32:58.733450 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 16 02:32:58.735131 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 16 02:32:58.738272 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 02:32:58.741680 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 16 02:32:58.745170 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 16 02:32:58.750877 systemd[1]: Stopped target swap.target - Swaps.
Apr 16 02:32:58.752389 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 16 02:32:58.752570 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 02:32:58.757907 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 16 02:32:58.759687 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 02:32:58.763066 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 16 02:32:58.763364 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 02:32:58.766769 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 16 02:32:58.766896 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 16 02:32:58.771918 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 16 02:32:58.772135 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 02:32:58.775511 systemd[1]: Stopped target paths.target - Path Units.
Apr 16 02:32:58.778361 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 16 02:32:58.782840 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 02:32:58.788648 systemd[1]: Stopped target slices.target - Slice Units.
Apr 16 02:32:58.788801 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 16 02:32:58.793918 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 16 02:32:58.794172 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 02:32:58.797060 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 16 02:32:58.797165 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 02:32:58.800522 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 16 02:32:58.800711 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 02:32:58.803933 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 16 02:32:58.804043 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 16 02:32:58.810292 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 16 02:32:58.815737 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 16 02:32:58.815877 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 02:32:58.823198 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 16 02:32:58.825958 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 16 02:32:58.826095 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 02:32:58.831362 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 16 02:32:58.831441 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 02:32:58.837395 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 16 02:32:58.837520 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 16 02:32:58.848092 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 16 02:32:58.857366 ignition[1101]: INFO : Ignition 2.22.0
Apr 16 02:32:58.857366 ignition[1101]: INFO : Stage: umount
Apr 16 02:32:58.861358 ignition[1101]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 02:32:58.861358 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 02:32:58.861358 ignition[1101]: INFO : umount: umount passed
Apr 16 02:32:58.861358 ignition[1101]: INFO : Ignition finished successfully
Apr 16 02:32:58.860182 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 16 02:32:58.860319 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 16 02:32:58.861642 systemd[1]: Stopped target network.target - Network.
Apr 16 02:32:58.870094 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 16 02:32:58.870156 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 16 02:32:58.873744 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 16 02:32:58.873784 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 16 02:32:58.877748 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 16 02:32:58.877803 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 16 02:32:58.881959 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 16 02:32:58.882006 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 16 02:32:58.888921 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 16 02:32:58.893149 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 16 02:32:58.899934 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 16 02:32:58.900004 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 16 02:32:58.903155 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 16 02:32:58.903261 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 16 02:32:58.910091 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 16 02:32:58.910263 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 16 02:32:58.916782 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 16 02:32:58.916959 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 16 02:32:58.917043 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 16 02:32:58.924850 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 16 02:32:58.925595 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 16 02:32:58.929164 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 16 02:32:58.929293 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 02:32:58.931057 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 16 02:32:58.935359 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 16 02:32:58.935423 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 02:32:58.941793 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 16 02:32:58.941862 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 16 02:32:58.946340 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 16 02:32:58.946406 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 16 02:32:58.948070 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 16 02:32:58.948113 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 02:32:58.956368 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 02:32:58.963690 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 16 02:32:58.963794 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 16 02:32:59.026071 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 16 02:32:59.026284 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 02:32:59.028061 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 16 02:32:59.028099 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 16 02:32:59.032649 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 16 02:32:59.032702 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 02:32:59.032814 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 16 02:32:59.032846 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 02:32:59.038809 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 16 02:32:59.038861 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 16 02:32:59.043262 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 02:32:59.043310 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 02:32:59.049373 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 16 02:32:59.052237 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 16 02:32:59.052283 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 02:32:59.057486 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 16 02:32:59.057521 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 02:32:59.062892 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 02:32:59.062930 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 02:32:59.070608 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 16 02:32:59.070647 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 16 02:32:59.070672 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 16 02:32:59.070892 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 16 02:32:59.070968 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 16 02:32:59.073598 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 16 02:32:59.073686 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 16 02:32:59.078940 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 16 02:32:59.083901 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 16 02:32:59.114767 systemd[1]: Switching root.
Apr 16 02:32:59.150126 systemd-journald[202]: Journal stopped
Apr 16 02:32:59.966673 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Apr 16 02:32:59.966722 kernel: SELinux: policy capability network_peer_controls=1
Apr 16 02:32:59.966736 kernel: SELinux: policy capability open_perms=1
Apr 16 02:32:59.966746 kernel: SELinux: policy capability extended_socket_class=1
Apr 16 02:32:59.966757 kernel: SELinux: policy capability always_check_network=0
Apr 16 02:32:59.966767 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 16 02:32:59.966777 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 16 02:32:59.966784 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 16 02:32:59.966792 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 16 02:32:59.966803 kernel: SELinux: policy capability userspace_initial_context=0
Apr 16 02:32:59.966811 kernel: audit: type=1403 audit(1776306779.290:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 16 02:32:59.966821 systemd[1]: Successfully loaded SELinux policy in 51.140ms.
Apr 16 02:32:59.966831 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.436ms.
Apr 16 02:32:59.966840 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 02:32:59.966848 systemd[1]: Detected virtualization kvm.
Apr 16 02:32:59.966855 systemd[1]: Detected architecture x86-64.
Apr 16 02:32:59.966865 systemd[1]: Detected first boot.
Apr 16 02:32:59.966873 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 02:32:59.966881 zram_generator::config[1147]: No configuration found.
Apr 16 02:32:59.966891 kernel: Guest personality initialized and is inactive
Apr 16 02:32:59.966900 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 16 02:32:59.966908 kernel: Initialized host personality
Apr 16 02:32:59.966915 kernel: NET: Registered PF_VSOCK protocol family
Apr 16 02:32:59.966923 systemd[1]: Populated /etc with preset unit settings.
Apr 16 02:32:59.966931 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 16 02:32:59.966939 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 16 02:32:59.966947 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 16 02:32:59.966955 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 16 02:32:59.966963 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 16 02:32:59.966972 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 16 02:32:59.966980 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 16 02:32:59.966988 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 16 02:32:59.966996 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 16 02:32:59.967003 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 16 02:32:59.967011 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 16 02:32:59.967019 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 16 02:32:59.967027 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 02:32:59.967036 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 02:32:59.967043 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 16 02:32:59.967051 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 16 02:32:59.967061 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 16 02:32:59.967068 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 02:32:59.967076 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 16 02:32:59.967083 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 02:32:59.967091 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 02:32:59.967100 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 16 02:32:59.967108 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 16 02:32:59.967116 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 16 02:32:59.967123 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 16 02:32:59.967131 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 02:32:59.967139 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 02:32:59.967146 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 02:32:59.967154 systemd[1]: Reached target swap.target - Swaps.
Apr 16 02:32:59.967161 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 16 02:32:59.967170 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 16 02:32:59.967178 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 16 02:32:59.967186 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 02:32:59.967194 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 02:32:59.967201 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 02:32:59.967209 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 16 02:32:59.967242 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 16 02:32:59.967251 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 16 02:32:59.967259 systemd[1]: Mounting media.mount - External Media Directory...
Apr 16 02:32:59.967275 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 02:32:59.967283 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 16 02:32:59.967291 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 16 02:32:59.967299 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 16 02:32:59.967309 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 16 02:32:59.967318 systemd[1]: Reached target machines.target - Containers.
Apr 16 02:32:59.967326 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 16 02:32:59.967334 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 02:32:59.967343 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 02:32:59.967351 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 16 02:32:59.967359 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 02:32:59.967367 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 02:32:59.967375 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 02:32:59.967382 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 16 02:32:59.967390 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 02:32:59.967397 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 16 02:32:59.967405 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 16 02:32:59.967414 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 16 02:32:59.967422 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 16 02:32:59.967430 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 16 02:32:59.967438 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 02:32:59.967446 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 02:32:59.967454 kernel: loop: module loaded
Apr 16 02:32:59.967461 kernel: fuse: init (API version 7.41)
Apr 16 02:32:59.967468 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 02:32:59.967488 kernel: ACPI: bus type drm_connector registered
Apr 16 02:32:59.967498 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 02:32:59.967506 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 16 02:32:59.967514 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 16 02:32:59.967521 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 02:32:59.967543 systemd-journald[1218]: Collecting audit messages is disabled.
Apr 16 02:32:59.967562 systemd-journald[1218]: Journal started
Apr 16 02:32:59.967579 systemd-journald[1218]: Runtime Journal (/run/log/journal/d1079c1c1a20437bb96faac16334edfe) is 6M, max 48.2M, 42.2M free.
Apr 16 02:32:59.648371 systemd[1]: Queued start job for default target multi-user.target.
Apr 16 02:32:59.659179 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 16 02:32:59.659592 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 16 02:32:59.972416 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 16 02:32:59.982766 systemd[1]: Stopped verity-setup.service.
Apr 16 02:33:00.017309 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 02:33:00.023353 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 02:33:00.023840 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 16 02:33:00.025827 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 16 02:33:00.027713 systemd[1]: Mounted media.mount - External Media Directory.
Apr 16 02:33:00.029450 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 16 02:33:00.031428 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 16 02:33:00.033755 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 16 02:33:00.036572 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 16 02:33:00.039193 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 02:33:00.042072 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 16 02:33:00.042372 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 16 02:33:00.045541 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 02:33:00.045685 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 02:33:00.048738 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 02:33:00.049117 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 02:33:00.051542 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 02:33:00.051695 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 02:33:00.053785 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 16 02:33:00.054001 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 16 02:33:00.056144 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 02:33:00.056343 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 02:33:00.058764 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 02:33:00.061753 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 02:33:00.064974 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 16 02:33:00.067496 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 16 02:33:00.078641 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 02:33:00.083915 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 02:33:00.086973 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 16 02:33:00.090302 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 16 02:33:00.092532 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 16 02:33:00.092655 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 02:33:00.095413 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 16 02:33:00.103387 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 16 02:33:00.107537 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 02:33:00.108513 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 16 02:33:00.111464 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 16 02:33:00.113846 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 02:33:00.114700 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 16 02:33:00.116702 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 02:33:00.117395 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 02:33:00.123344 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 16 02:33:00.129340 systemd-journald[1218]: Time spent on flushing to /var/log/journal/d1079c1c1a20437bb96faac16334edfe is 21.835ms for 983 entries.
Apr 16 02:33:00.129340 systemd-journald[1218]: System Journal (/var/log/journal/d1079c1c1a20437bb96faac16334edfe) is 8M, max 195.6M, 187.6M free.
Apr 16 02:33:00.171638 systemd-journald[1218]: Received client request to flush runtime journal.
Apr 16 02:33:00.171786 kernel: loop0: detected capacity change from 0 to 128560
Apr 16 02:33:00.126391 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 16 02:33:00.131926 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 16 02:33:00.132099 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 16 02:33:00.141693 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 02:33:00.144140 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 16 02:33:00.148036 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 16 02:33:00.154380 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 16 02:33:00.175662 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 16 02:33:00.190536 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 16 02:33:00.191591 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 16 02:33:00.192537 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 16 02:33:00.196656 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 16 02:33:00.202290 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 02:33:00.211260 kernel: loop1: detected capacity change from 0 to 219192
Apr 16 02:33:00.231109 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Apr 16 02:33:00.231132 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Apr 16 02:33:00.235695 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 02:33:00.237723 kernel: loop2: detected capacity change from 0 to 110984
Apr 16 02:33:00.267267 kernel: loop3: detected capacity change from 0 to 128560
Apr 16 02:33:00.278301 kernel: loop4: detected capacity change from 0 to 219192
Apr 16 02:33:00.290243 kernel: loop5: detected capacity change from 0 to 110984
Apr 16 02:33:00.297340 (sd-merge)[1292]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 16 02:33:00.297688 (sd-merge)[1292]: Merged extensions into '/usr'.
Apr 16 02:33:00.301100 systemd[1]: Reload requested from client PID 1268 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 16 02:33:00.301197 systemd[1]: Reloading...
Apr 16 02:33:00.363372 zram_generator::config[1318]: No configuration found.
Apr 16 02:33:00.455361 ldconfig[1263]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 16 02:33:00.567781 systemd[1]: Reloading finished in 266 ms.
Apr 16 02:33:00.590824 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 16 02:33:00.592870 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 16 02:33:00.610576 systemd[1]: Starting ensure-sysext.service...
Apr 16 02:33:00.612848 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 02:33:00.619565 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 16 02:33:00.622919 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 02:33:00.626581 systemd[1]: Reload requested from client PID 1356 ('systemctl') (unit ensure-sysext.service)...
Apr 16 02:33:00.626604 systemd[1]: Reloading...
Apr 16 02:33:00.626947 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 16 02:33:00.626997 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 16 02:33:00.627283 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 16 02:33:00.627551 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 16 02:33:00.628439 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 16 02:33:00.628852 systemd-tmpfiles[1357]: ACLs are not supported, ignoring.
Apr 16 02:33:00.628947 systemd-tmpfiles[1357]: ACLs are not supported, ignoring.
Apr 16 02:33:00.632841 systemd-tmpfiles[1357]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 02:33:00.632851 systemd-tmpfiles[1357]: Skipping /boot
Apr 16 02:33:00.642015 systemd-tmpfiles[1357]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 02:33:00.642038 systemd-tmpfiles[1357]: Skipping /boot
Apr 16 02:33:00.652836 systemd-udevd[1360]: Using default interface naming scheme 'v255'.
Apr 16 02:33:00.661260 zram_generator::config[1381]: No configuration found.
Apr 16 02:33:00.776266 kernel: mousedev: PS/2 mouse device common for all mice
Apr 16 02:33:00.792345 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 16 02:33:00.799275 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 16 02:33:00.808253 kernel: ACPI: button: Power Button [PWRF]
Apr 16 02:33:00.816284 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 16 02:33:00.928010 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 16 02:33:00.928385 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 16 02:33:00.932976 systemd[1]: Reloading finished in 306 ms.
Apr 16 02:33:00.960822 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 02:33:01.022178 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 02:33:01.116867 systemd[1]: Finished ensure-sysext.service.
Apr 16 02:33:01.121264 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 02:33:01.123350 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 16 02:33:01.128300 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 16 02:33:01.131648 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 02:33:01.133389 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 02:33:01.144441 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 02:33:01.147805 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 02:33:01.151066 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 02:33:01.153120 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 02:33:01.154704 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 16 02:33:01.156519 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 02:33:01.159998 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 16 02:33:01.166626 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 02:33:01.171113 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 02:33:01.178274 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 16 02:33:01.182107 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 16 02:33:01.187575 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 02:33:01.190596 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 02:33:01.191535 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 02:33:01.192087 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 02:33:01.195065 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 02:33:01.195629 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 02:33:01.198383 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 02:33:01.200930 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 02:33:01.204463 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 02:33:01.204820 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 02:33:01.210037 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 16 02:33:01.216107 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 16 02:33:01.219894 augenrules[1511]: No rules
Apr 16 02:33:01.219860 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 16 02:33:01.220028 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 16 02:33:01.226851 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 16 02:33:01.228950 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 02:33:01.228997 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 02:33:01.231392 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 16 02:33:01.245828 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 16 02:33:01.248622 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 16 02:33:01.252808 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 16 02:33:01.339652 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 16 02:33:01.344797 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 02:33:01.346921 systemd-networkd[1491]: lo: Link UP
Apr 16 02:33:01.346928 systemd-networkd[1491]: lo: Gained carrier
Apr 16 02:33:01.347707 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 16 02:33:01.347803 systemd-networkd[1491]: Enumeration completed
Apr 16 02:33:01.348058 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 02:33:01.348172 systemd-networkd[1491]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 02:33:01.348187 systemd-networkd[1491]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 02:33:01.349068 systemd-resolved[1494]: Positive Trust Anchors:
Apr 16 02:33:01.349079 systemd-resolved[1494]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 02:33:01.349112 systemd-resolved[1494]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 02:33:01.349472 systemd-networkd[1491]: eth0: Link UP
Apr 16 02:33:01.349610 systemd-networkd[1491]: eth0: Gained carrier
Apr 16 02:33:01.349632 systemd-networkd[1491]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 02:33:01.351101 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 16 02:33:01.353157 systemd[1]: Reached target time-set.target - System Time Set.
Apr 16 02:33:01.353804 systemd-resolved[1494]: Defaulting to hostname 'linux'.
Apr 16 02:33:01.356384 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 16 02:33:01.359391 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 16 02:33:01.361197 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 02:33:01.362323 systemd-networkd[1491]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 16 02:33:01.363122 systemd-timesyncd[1496]: Network configuration changed, trying to establish connection.
Apr 16 02:33:01.363202 systemd[1]: Reached target network.target - Network.
Apr 16 02:33:01.364402 systemd-timesyncd[1496]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 16 02:33:01.364470 systemd-timesyncd[1496]: Initial clock synchronization to Thu 2026-04-16 02:33:01.244821 UTC.
Apr 16 02:33:01.365476 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 02:33:01.368044 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 02:33:01.370478 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 16 02:33:01.373128 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 16 02:33:01.375511 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Apr 16 02:33:01.378309 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 16 02:33:01.380404 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 16 02:33:01.382130 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 16 02:33:01.383848 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 16 02:33:01.383882 systemd[1]: Reached target paths.target - Path Units.
Apr 16 02:33:01.385121 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 02:33:01.387069 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 16 02:33:01.389607 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 16 02:33:01.392879 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 16 02:33:01.395131 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 16 02:33:01.396893 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 16 02:33:01.404439 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 16 02:33:01.406619 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 16 02:33:01.409476 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 16 02:33:01.411541 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 16 02:33:01.414865 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 02:33:01.416468 systemd[1]: Reached target basic.target - Basic System.
Apr 16 02:33:01.417906 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 16 02:33:01.417951 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 16 02:33:01.419279 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 16 02:33:01.421938 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 16 02:33:01.444509 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 16 02:33:01.447782 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 16 02:33:01.456873 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 16 02:33:01.459085 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 16 02:33:01.460560 jq[1549]: false
Apr 16 02:33:01.461832 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Apr 16 02:33:01.465119 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 16 02:33:01.467817 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 16 02:33:01.472359 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 16 02:33:01.476829 extend-filesystems[1550]: Found /dev/vda6
Apr 16 02:33:01.478460 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 16 02:33:01.483075 extend-filesystems[1550]: Found /dev/vda9
Apr 16 02:33:01.484652 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 16 02:33:01.485140 extend-filesystems[1550]: Checking size of /dev/vda9
Apr 16 02:33:01.487358 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 16 02:33:01.487895 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 16 02:33:01.494257 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Refreshing passwd entry cache
Apr 16 02:33:01.491536 oslogin_cache_refresh[1551]: Refreshing passwd entry cache
Apr 16 02:33:01.494423 systemd[1]: Starting update-engine.service - Update Engine...
Apr 16 02:33:01.498588 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 16 02:33:01.499708 extend-filesystems[1550]: Resized partition /dev/vda9
Apr 16 02:33:01.502387 extend-filesystems[1574]: resize2fs 1.47.3 (8-Jul-2025)
Apr 16 02:33:01.504010 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Failure getting users, quitting
Apr 16 02:33:01.504010 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 16 02:33:01.504010 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Refreshing group entry cache
Apr 16 02:33:01.503460 oslogin_cache_refresh[1551]: Failure getting users, quitting
Apr 16 02:33:01.503474 oslogin_cache_refresh[1551]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 16 02:33:01.503536 oslogin_cache_refresh[1551]: Refreshing group entry cache
Apr 16 02:33:01.504598 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 16 02:33:01.507831 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 16 02:33:01.508685 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 16 02:33:01.509301 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 16 02:33:01.511826 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 16 02:33:01.511958 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Failure getting groups, quitting
Apr 16 02:33:01.511958 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 16 02:33:01.511873 oslogin_cache_refresh[1551]: Failure getting groups, quitting
Apr 16 02:33:01.511883 oslogin_cache_refresh[1551]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 16 02:33:01.516107 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 16 02:33:01.520878 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Apr 16 02:33:01.521100 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Apr 16 02:33:01.524093 systemd[1]: motdgen.service: Deactivated successfully.
Apr 16 02:33:01.524436 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 16 02:33:01.524677 jq[1573]: true
Apr 16 02:33:01.550771 (ntainerd)[1580]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 16 02:33:01.563704 jq[1579]: true
Apr 16 02:33:01.573253 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 16 02:33:01.575961 update_engine[1566]: I20260416 02:33:01.575809 1566 main.cc:92] Flatcar Update Engine starting
Apr 16 02:33:01.586615 tar[1577]: linux-amd64/LICENSE
Apr 16 02:33:01.589073 extend-filesystems[1574]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 16 02:33:01.589073 extend-filesystems[1574]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 16 02:33:01.589073 extend-filesystems[1574]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 16 02:33:01.603604 extend-filesystems[1550]: Resized filesystem in /dev/vda9
Apr 16 02:33:01.593819 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 16 02:33:01.605750 tar[1577]: linux-amd64/helm
Apr 16 02:33:01.594265 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 16 02:33:01.612960 dbus-daemon[1547]: [system] SELinux support is enabled
Apr 16 02:33:01.613101 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 16 02:33:01.615129 systemd-logind[1562]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 16 02:33:01.615147 systemd-logind[1562]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 16 02:33:01.615782 systemd-logind[1562]: New seat seat0.
Apr 16 02:33:01.618002 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 16 02:33:01.620772 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 16 02:33:01.620793 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 16 02:33:01.623777 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 16 02:33:01.623799 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 16 02:33:01.635384 bash[1610]: Updated "/home/core/.ssh/authorized_keys"
Apr 16 02:33:01.637920 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 16 02:33:01.640397 dbus-daemon[1547]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 16 02:33:01.641863 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 16 02:33:01.642392 update_engine[1566]: I20260416 02:33:01.642270 1566 update_check_scheduler.cc:74] Next update check in 5m51s
Apr 16 02:33:01.642435 systemd[1]: Started update-engine.service - Update Engine.
Apr 16 02:33:01.650535 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 16 02:33:01.721138 locksmithd[1612]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 16 02:33:01.725364 sshd_keygen[1578]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 16 02:33:01.751035 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 16 02:33:01.755950 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 16 02:33:01.767697 containerd[1580]: time="2026-04-16T02:33:01Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 16 02:33:01.768632 containerd[1580]: time="2026-04-16T02:33:01.768304371Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Apr 16 02:33:01.776690 systemd[1]: issuegen.service: Deactivated successfully.
Apr 16 02:33:01.777033 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 16 02:33:01.781010 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 16 02:33:01.784102 containerd[1580]: time="2026-04-16T02:33:01.784061086Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.526µs"
Apr 16 02:33:01.784155 containerd[1580]: time="2026-04-16T02:33:01.784104338Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 16 02:33:01.784155 containerd[1580]: time="2026-04-16T02:33:01.784125817Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 16 02:33:01.784547 containerd[1580]: time="2026-04-16T02:33:01.784504895Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 16 02:33:01.784547 containerd[1580]: time="2026-04-16T02:33:01.784547678Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 16 02:33:01.784623 containerd[1580]: time="2026-04-16T02:33:01.784576403Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 16 02:33:01.784657 containerd[1580]: time="2026-04-16T02:33:01.784626904Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 16 02:33:01.784657 containerd[1580]: time="2026-04-16T02:33:01.784638242Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 16 02:33:01.785003 containerd[1580]: time="2026-04-16T02:33:01.784867722Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 16 02:33:01.785003 containerd[1580]: time="2026-04-16T02:33:01.784890479Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 16 02:33:01.785003 containerd[1580]: time="2026-04-16T02:33:01.784908137Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 16 02:33:01.785003 containerd[1580]: time="2026-04-16T02:33:01.784918063Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 16 02:33:01.785003 containerd[1580]: time="2026-04-16T02:33:01.784993383Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 16 02:33:01.785280 containerd[1580]: time="2026-04-16T02:33:01.785258946Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 16 02:33:01.785316 containerd[1580]: time="2026-04-16T02:33:01.785298212Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 16 02:33:01.785339 containerd[1580]: time="2026-04-16T02:33:01.785317905Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 16 02:33:01.785384 containerd[1580]: time="2026-04-16T02:33:01.785352377Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 16 02:33:01.785715 containerd[1580]: time="2026-04-16T02:33:01.785692275Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 16 02:33:01.785811 containerd[1580]: time="2026-04-16T02:33:01.785800446Z" level=info msg="metadata content store policy set" policy=shared
Apr 16 02:33:01.791332 containerd[1580]: time="2026-04-16T02:33:01.791295305Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 16 02:33:01.791599 containerd[1580]: time="2026-04-16T02:33:01.791481475Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 16 02:33:01.791667 containerd[1580]: time="2026-04-16T02:33:01.791655678Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 16 02:33:01.791702 containerd[1580]: time="2026-04-16T02:33:01.791695523Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 16 02:33:01.791736 containerd[1580]: time="2026-04-16T02:33:01.791729591Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 16 02:33:01.791763 containerd[1580]: time="2026-04-16T02:33:01.791758088Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 16 02:33:01.791791 containerd[1580]: time="2026-04-16T02:33:01.791785209Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 16 02:33:01.791833 containerd[1580]: time="2026-04-16T02:33:01.791827021Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 16 02:33:01.791861 containerd[1580]: time="2026-04-16T02:33:01.791855589Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 16 02:33:01.791887 containerd[1580]: time="2026-04-16T02:33:01.791881578Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 16 02:33:01.791911 containerd[1580]: time="2026-04-16T02:33:01.791906413Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 16 02:33:01.791940 containerd[1580]: time="2026-04-16T02:33:01.791934429Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 16 02:33:01.792049 containerd[1580]: time="2026-04-16T02:33:01.792040847Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 16 02:33:01.792095 containerd[1580]: time="2026-04-16T02:33:01.792088110Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 16 02:33:01.792126 containerd[1580]: time="2026-04-16T02:33:01.792120008Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 16 02:33:01.792153 containerd[1580]: time="2026-04-16T02:33:01.792147529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 16 02:33:01.792178 containerd[1580]: time="2026-04-16T02:33:01.792172992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 16 02:33:01.792203 containerd[1580]: time="2026-04-16T02:33:01.792198011Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 16 02:33:01.792370 containerd[1580]: time="2026-04-16T02:33:01.792274519Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 16 02:33:01.792370 containerd[1580]: time="2026-04-16T02:33:01.792285319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 16 02:33:01.792370 containerd[1580]: time="2026-04-16T02:33:01.792294253Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 16 02:33:01.792370 containerd[1580]: time="2026-04-16T02:33:01.792302325Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 16 02:33:01.792370 containerd[1580]: time="2026-04-16T02:33:01.792309922Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 16 02:33:01.792370 containerd[1580]: time="2026-04-16T02:33:01.792344138Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 16 02:33:01.792370 containerd[1580]: time="2026-04-16T02:33:01.792354271Z" level=info msg="Start snapshots syncer"
Apr 16 02:33:01.792522 containerd[1580]: time="2026-04-16T02:33:01.792513239Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 16 02:33:01.792829 containerd[1580]: time="2026-04-16T02:33:01.792800279Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 16 02:33:01.793012 containerd[1580]: time="2026-04-16T02:33:01.792996107Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Apr 16 02:33:01.793163 containerd[1580]: time="2026-04-16T02:33:01.793110123Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Apr 16 02:33:01.793313 containerd[1580]: time="2026-04-16T02:33:01.793302342Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Apr 16 02:33:01.793356 containerd[1580]: time="2026-04-16T02:33:01.793349985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Apr 16 02:33:01.793384 containerd[1580]: time="2026-04-16T02:33:01.793378265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Apr 16 02:33:01.793410 containerd[1580]: time="2026-04-16T02:33:01.793404016Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Apr 16 02:33:01.793442 containerd[1580]: time="2026-04-16T02:33:01.793434417Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Apr 16 02:33:01.793478 containerd[1580]: time="2026-04-16T02:33:01.793469258Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Apr 16 02:33:01.793638 containerd[1580]: time="2026-04-16T02:33:01.793550851Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 16 02:33:01.793638 containerd[1580]: time="2026-04-16T02:33:01.793597377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Apr 16 02:33:01.793726 containerd[1580]: time="2026-04-16T02:33:01.793715891Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Apr 16 02:33:01.793770 containerd[1580]: time="2026-04-16T02:33:01.793760545Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Apr 16 02:33:01.793864 containerd[1580]: time="2026-04-16T02:33:01.793852638Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 16 02:33:01.793999 containerd[1580]: time="2026-04-16T02:33:01.793987216Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 16 02:33:01.794040 containerd[1580]: time="2026-04-16T02:33:01.794032715Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 16 02:33:01.794087 containerd[1580]: time="2026-04-16T02:33:01.794076052Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 16 02:33:01.794120 containerd[1580]: time="2026-04-16T02:33:01.794113090Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Apr 16 02:33:01.794157 containerd[1580]: time="2026-04-16T02:33:01.794149845Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Apr 16 02:33:01.794199 containerd[1580]: time="2026-04-16T02:33:01.794191218Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Apr 16 02:33:01.794469 containerd[1580]: time="2026-04-16T02:33:01.794421934Z" level=info msg="runtime interface created"
Apr 16 02:33:01.794469 containerd[1580]: time="2026-04-16T02:33:01.794441505Z" level=info msg="created NRI interface"
Apr 16 02:33:01.794469 containerd[1580]: time="2026-04-16T02:33:01.794461625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Apr 16 02:33:01.794629 containerd[1580]: time="2026-04-16T02:33:01.794478987Z" level=info msg="Connect containerd service"
Apr 16 02:33:01.794629 containerd[1580]: time="2026-04-16T02:33:01.794539567Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 16 02:33:01.796174 containerd[1580]: time="2026-04-16T02:33:01.796121806Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 16 02:33:01.806280 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 16 02:33:01.810275 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 16 02:33:01.814710 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 16 02:33:01.818590 systemd[1]: Reached target getty.target - Login Prompts.
Apr 16 02:33:01.878658 containerd[1580]: time="2026-04-16T02:33:01.878478290Z" level=info msg="Start subscribing containerd event"
Apr 16 02:33:01.878658 containerd[1580]: time="2026-04-16T02:33:01.878614472Z" level=info msg="Start recovering state"
Apr 16 02:33:01.878804 containerd[1580]: time="2026-04-16T02:33:01.878627605Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 16 02:33:01.878804 containerd[1580]: time="2026-04-16T02:33:01.878698852Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 16 02:33:01.878804 containerd[1580]: time="2026-04-16T02:33:01.878799934Z" level=info msg="Start event monitor"
Apr 16 02:33:01.878891 containerd[1580]: time="2026-04-16T02:33:01.878825092Z" level=info msg="Start cni network conf syncer for default"
Apr 16 02:33:01.878891 containerd[1580]: time="2026-04-16T02:33:01.878831823Z" level=info msg="Start streaming server"
Apr 16 02:33:01.878891 containerd[1580]: time="2026-04-16T02:33:01.878840585Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Apr 16 02:33:01.878891 containerd[1580]: time="2026-04-16T02:33:01.878860356Z" level=info msg="runtime interface starting up..."
Apr 16 02:33:01.879054 containerd[1580]: time="2026-04-16T02:33:01.879006181Z" level=info msg="starting plugins..."
Apr 16 02:33:01.879054 containerd[1580]: time="2026-04-16T02:33:01.879030176Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 16 02:33:01.879279 systemd[1]: Started containerd.service - containerd container runtime. Apr 16 02:33:01.883867 containerd[1580]: time="2026-04-16T02:33:01.883384957Z" level=info msg="containerd successfully booted in 0.116208s" Apr 16 02:33:01.906796 tar[1577]: linux-amd64/README.md Apr 16 02:33:01.927572 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 16 02:33:03.082637 systemd-networkd[1491]: eth0: Gained IPv6LL Apr 16 02:33:03.085778 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 16 02:33:03.088708 systemd[1]: Reached target network-online.target - Network is Online. Apr 16 02:33:03.091899 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 16 02:33:03.096199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:33:03.107761 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 16 02:33:03.132525 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 16 02:33:03.135650 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 16 02:33:03.135959 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 16 02:33:03.139945 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 16 02:33:04.300632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:33:04.304459 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 16 02:33:04.307515 systemd[1]: Startup finished in 3.813s (kernel) + 5.661s (initrd) + 5.067s (userspace) = 14.542s. 
Apr 16 02:33:04.318877 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 02:33:04.916370 kubelet[1681]: E0416 02:33:04.916307 1681 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 02:33:04.920120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 02:33:04.920348 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 02:33:04.920988 systemd[1]: kubelet.service: Consumed 1.081s CPU time, 256.2M memory peak. Apr 16 02:33:07.474148 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 16 02:33:07.475144 systemd[1]: Started sshd@0-10.0.0.48:22-10.0.0.1:42640.service - OpenSSH per-connection server daemon (10.0.0.1:42640). Apr 16 02:33:07.547802 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 42640 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:33:07.549619 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:33:07.555058 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 16 02:33:07.555781 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 16 02:33:07.560305 systemd-logind[1562]: New session 1 of user core. Apr 16 02:33:07.574131 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 16 02:33:07.576318 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 16 02:33:07.590442 (systemd)[1700]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 16 02:33:07.593436 systemd-logind[1562]: New session c1 of user core. Apr 16 02:33:07.695095 systemd[1700]: Queued start job for default target default.target. Apr 16 02:33:07.707129 systemd[1700]: Created slice app.slice - User Application Slice. Apr 16 02:33:07.707172 systemd[1700]: Reached target paths.target - Paths. Apr 16 02:33:07.707387 systemd[1700]: Reached target timers.target - Timers. Apr 16 02:33:07.708480 systemd[1700]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 16 02:33:07.717850 systemd[1700]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 16 02:33:07.718027 systemd[1700]: Reached target sockets.target - Sockets. Apr 16 02:33:07.718071 systemd[1700]: Reached target basic.target - Basic System. Apr 16 02:33:07.718095 systemd[1700]: Reached target default.target - Main User Target. Apr 16 02:33:07.718112 systemd[1700]: Startup finished in 119ms. Apr 16 02:33:07.718257 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 16 02:33:07.719361 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 16 02:33:07.729506 systemd[1]: Started sshd@1-10.0.0.48:22-10.0.0.1:42654.service - OpenSSH per-connection server daemon (10.0.0.1:42654). Apr 16 02:33:07.786810 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 42654 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:33:07.788034 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:33:07.793389 systemd-logind[1562]: New session 2 of user core. Apr 16 02:33:07.803599 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 16 02:33:07.815455 sshd[1714]: Connection closed by 10.0.0.1 port 42654 Apr 16 02:33:07.815986 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Apr 16 02:33:07.829268 systemd[1]: sshd@1-10.0.0.48:22-10.0.0.1:42654.service: Deactivated successfully. Apr 16 02:33:07.830652 systemd[1]: session-2.scope: Deactivated successfully. Apr 16 02:33:07.831696 systemd-logind[1562]: Session 2 logged out. Waiting for processes to exit. Apr 16 02:33:07.833095 systemd[1]: Started sshd@2-10.0.0.48:22-10.0.0.1:42666.service - OpenSSH per-connection server daemon (10.0.0.1:42666). Apr 16 02:33:07.833779 systemd-logind[1562]: Removed session 2. Apr 16 02:33:07.894083 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 42666 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:33:07.895084 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:33:07.901160 systemd-logind[1562]: New session 3 of user core. Apr 16 02:33:07.911610 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 16 02:33:07.934026 sshd[1723]: Connection closed by 10.0.0.1 port 42666 Apr 16 02:33:07.962527 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Apr 16 02:33:07.971043 systemd[1]: sshd@2-10.0.0.48:22-10.0.0.1:42666.service: Deactivated successfully. Apr 16 02:33:07.972471 systemd[1]: session-3.scope: Deactivated successfully. Apr 16 02:33:07.973554 systemd-logind[1562]: Session 3 logged out. Waiting for processes to exit. Apr 16 02:33:07.975193 systemd[1]: Started sshd@3-10.0.0.48:22-10.0.0.1:42676.service - OpenSSH per-connection server daemon (10.0.0.1:42676). Apr 16 02:33:07.976012 systemd-logind[1562]: Removed session 3. 
Apr 16 02:33:08.030845 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 42676 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:33:08.032167 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:33:08.036625 systemd-logind[1562]: New session 4 of user core. Apr 16 02:33:08.046428 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 16 02:33:08.056562 sshd[1733]: Connection closed by 10.0.0.1 port 42676 Apr 16 02:33:08.056910 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Apr 16 02:33:08.068009 systemd[1]: sshd@3-10.0.0.48:22-10.0.0.1:42676.service: Deactivated successfully. Apr 16 02:33:08.069161 systemd[1]: session-4.scope: Deactivated successfully. Apr 16 02:33:08.069762 systemd-logind[1562]: Session 4 logged out. Waiting for processes to exit. Apr 16 02:33:08.071298 systemd[1]: Started sshd@4-10.0.0.48:22-10.0.0.1:42684.service - OpenSSH per-connection server daemon (10.0.0.1:42684). Apr 16 02:33:08.072283 systemd-logind[1562]: Removed session 4. Apr 16 02:33:08.122405 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 42684 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:33:08.123632 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:33:08.127524 systemd-logind[1562]: New session 5 of user core. Apr 16 02:33:08.141504 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 16 02:33:08.155760 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 16 02:33:08.155954 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 02:33:08.169037 sudo[1743]: pam_unix(sudo:session): session closed for user root Apr 16 02:33:08.170453 sshd[1742]: Connection closed by 10.0.0.1 port 42684 Apr 16 02:33:08.170708 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Apr 16 02:33:08.185151 systemd[1]: sshd@4-10.0.0.48:22-10.0.0.1:42684.service: Deactivated successfully. Apr 16 02:33:08.186338 systemd[1]: session-5.scope: Deactivated successfully. Apr 16 02:33:08.186888 systemd-logind[1562]: Session 5 logged out. Waiting for processes to exit. Apr 16 02:33:08.188415 systemd[1]: Started sshd@5-10.0.0.48:22-10.0.0.1:42698.service - OpenSSH per-connection server daemon (10.0.0.1:42698). Apr 16 02:33:08.189076 systemd-logind[1562]: Removed session 5. Apr 16 02:33:08.234966 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 42698 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:33:08.236743 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:33:08.240910 systemd-logind[1562]: New session 6 of user core. Apr 16 02:33:08.255453 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 16 02:33:08.264992 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 16 02:33:08.265181 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 02:33:08.267946 sudo[1754]: pam_unix(sudo:session): session closed for user root Apr 16 02:33:08.271783 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 16 02:33:08.271968 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 02:33:08.279548 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 02:33:08.312452 augenrules[1776]: No rules Apr 16 02:33:08.313455 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 02:33:08.313640 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 02:33:08.314348 sudo[1753]: pam_unix(sudo:session): session closed for user root Apr 16 02:33:08.315569 sshd[1752]: Connection closed by 10.0.0.1 port 42698 Apr 16 02:33:08.315824 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Apr 16 02:33:08.329168 systemd[1]: sshd@5-10.0.0.48:22-10.0.0.1:42698.service: Deactivated successfully. Apr 16 02:33:08.330307 systemd[1]: session-6.scope: Deactivated successfully. Apr 16 02:33:08.330878 systemd-logind[1562]: Session 6 logged out. Waiting for processes to exit. Apr 16 02:33:08.332509 systemd[1]: Started sshd@6-10.0.0.48:22-10.0.0.1:42704.service - OpenSSH per-connection server daemon (10.0.0.1:42704). Apr 16 02:33:08.332965 systemd-logind[1562]: Removed session 6. Apr 16 02:33:08.381645 sshd[1785]: Accepted publickey for core from 10.0.0.1 port 42704 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:33:08.382865 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:33:08.386570 systemd-logind[1562]: New session 7 of user core. 
Apr 16 02:33:08.396599 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 16 02:33:08.405200 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 16 02:33:08.405445 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 02:33:08.663773 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 16 02:33:08.677520 (dockerd)[1809]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 16 02:33:08.863369 dockerd[1809]: time="2026-04-16T02:33:08.863069383Z" level=info msg="Starting up" Apr 16 02:33:08.864027 dockerd[1809]: time="2026-04-16T02:33:08.863958274Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 16 02:33:08.874937 dockerd[1809]: time="2026-04-16T02:33:08.874902046Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 16 02:33:08.991677 dockerd[1809]: time="2026-04-16T02:33:08.991500011Z" level=info msg="Loading containers: start." Apr 16 02:33:09.002248 kernel: Initializing XFRM netlink socket Apr 16 02:33:09.232654 systemd-networkd[1491]: docker0: Link UP Apr 16 02:33:09.237500 dockerd[1809]: time="2026-04-16T02:33:09.237431966Z" level=info msg="Loading containers: done." Apr 16 02:33:09.249586 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1362317033-merged.mount: Deactivated successfully. 
Apr 16 02:33:09.252014 dockerd[1809]: time="2026-04-16T02:33:09.251932321Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 16 02:33:09.252119 dockerd[1809]: time="2026-04-16T02:33:09.252067788Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 16 02:33:09.252180 dockerd[1809]: time="2026-04-16T02:33:09.252156445Z" level=info msg="Initializing buildkit" Apr 16 02:33:09.279638 dockerd[1809]: time="2026-04-16T02:33:09.279563296Z" level=info msg="Completed buildkit initialization" Apr 16 02:33:09.287175 dockerd[1809]: time="2026-04-16T02:33:09.287108005Z" level=info msg="Daemon has completed initialization" Apr 16 02:33:09.287381 dockerd[1809]: time="2026-04-16T02:33:09.287266949Z" level=info msg="API listen on /run/docker.sock" Apr 16 02:33:09.287492 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 16 02:33:09.747819 containerd[1580]: time="2026-04-16T02:33:09.747325508Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 16 02:33:10.297129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4116795807.mount: Deactivated successfully. 
Apr 16 02:33:11.052353 containerd[1580]: time="2026-04-16T02:33:11.052286179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:11.053067 containerd[1580]: time="2026-04-16T02:33:11.052989410Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27099952" Apr 16 02:33:11.054144 containerd[1580]: time="2026-04-16T02:33:11.054097978Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:11.056673 containerd[1580]: time="2026-04-16T02:33:11.056595641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:11.057696 containerd[1580]: time="2026-04-16T02:33:11.057666570Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 1.310305092s" Apr 16 02:33:11.057762 containerd[1580]: time="2026-04-16T02:33:11.057702170Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 16 02:33:11.058353 containerd[1580]: time="2026-04-16T02:33:11.058327721Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 16 02:33:11.861335 containerd[1580]: time="2026-04-16T02:33:11.861271761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:11.861873 containerd[1580]: time="2026-04-16T02:33:11.861831702Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252670" Apr 16 02:33:11.862887 containerd[1580]: time="2026-04-16T02:33:11.862828223Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:11.865039 containerd[1580]: time="2026-04-16T02:33:11.864994762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:11.866032 containerd[1580]: time="2026-04-16T02:33:11.866008266Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 807.652177ms" Apr 16 02:33:11.866067 containerd[1580]: time="2026-04-16T02:33:11.866039772Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 16 02:33:11.866524 containerd[1580]: time="2026-04-16T02:33:11.866448783Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 16 02:33:12.602464 containerd[1580]: time="2026-04-16T02:33:12.602329594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:12.603263 containerd[1580]: time="2026-04-16T02:33:12.603236397Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810823" Apr 16 02:33:12.604122 containerd[1580]: time="2026-04-16T02:33:12.604086603Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:12.607171 containerd[1580]: time="2026-04-16T02:33:12.606991125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:12.608044 containerd[1580]: time="2026-04-16T02:33:12.607997480Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 741.504978ms" Apr 16 02:33:12.608130 containerd[1580]: time="2026-04-16T02:33:12.608045748Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 16 02:33:12.608686 containerd[1580]: time="2026-04-16T02:33:12.608626186Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 16 02:33:13.316526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2778398689.mount: Deactivated successfully. 
Apr 16 02:33:13.526068 containerd[1580]: time="2026-04-16T02:33:13.525970812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:13.526864 containerd[1580]: time="2026-04-16T02:33:13.526801866Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972848" Apr 16 02:33:13.528096 containerd[1580]: time="2026-04-16T02:33:13.528020957Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:13.530341 containerd[1580]: time="2026-04-16T02:33:13.530282297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:13.530664 containerd[1580]: time="2026-04-16T02:33:13.530613644Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 921.964406ms" Apr 16 02:33:13.530664 containerd[1580]: time="2026-04-16T02:33:13.530648694Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 16 02:33:13.531258 containerd[1580]: time="2026-04-16T02:33:13.531202946Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 16 02:33:14.044051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3594773203.mount: Deactivated successfully. 
Apr 16 02:33:14.665566 containerd[1580]: time="2026-04-16T02:33:14.665477905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:14.666156 containerd[1580]: time="2026-04-16T02:33:14.666120930Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483" Apr 16 02:33:14.667079 containerd[1580]: time="2026-04-16T02:33:14.667014068Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:14.671905 containerd[1580]: time="2026-04-16T02:33:14.671829531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:14.672928 containerd[1580]: time="2026-04-16T02:33:14.672886810Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.141628987s" Apr 16 02:33:14.672967 containerd[1580]: time="2026-04-16T02:33:14.672929711Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 16 02:33:14.673465 containerd[1580]: time="2026-04-16T02:33:14.673420879Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 16 02:33:15.077342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 16 02:33:15.080532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 16 02:33:15.082383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1143290779.mount: Deactivated successfully. Apr 16 02:33:15.089595 containerd[1580]: time="2026-04-16T02:33:15.089540876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:15.090193 containerd[1580]: time="2026-04-16T02:33:15.090104428Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 16 02:33:15.091235 containerd[1580]: time="2026-04-16T02:33:15.091005218Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:15.093666 containerd[1580]: time="2026-04-16T02:33:15.093640328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:15.094922 containerd[1580]: time="2026-04-16T02:33:15.094889749Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 421.440053ms" Apr 16 02:33:15.094922 containerd[1580]: time="2026-04-16T02:33:15.094922295Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 16 02:33:15.096432 containerd[1580]: time="2026-04-16T02:33:15.096403563Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 16 02:33:15.237376 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 02:33:15.257733 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 02:33:15.307646 kubelet[2168]: E0416 02:33:15.307574 2168 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 02:33:15.310786 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 02:33:15.311191 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 02:33:15.311650 systemd[1]: kubelet.service: Consumed 176ms CPU time, 111M memory peak. Apr 16 02:33:15.617902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4250975358.mount: Deactivated successfully. Apr 16 02:33:16.335730 containerd[1580]: time="2026-04-16T02:33:16.335649673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:16.336272 containerd[1580]: time="2026-04-16T02:33:16.336241080Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874255" Apr 16 02:33:16.337329 containerd[1580]: time="2026-04-16T02:33:16.337298540Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:16.340534 containerd[1580]: time="2026-04-16T02:33:16.340487604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:33:16.342121 containerd[1580]: time="2026-04-16T02:33:16.342041758Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.245601806s" Apr 16 02:33:16.342121 containerd[1580]: time="2026-04-16T02:33:16.342083576Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 16 02:33:18.997479 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:33:18.997618 systemd[1]: kubelet.service: Consumed 176ms CPU time, 111M memory peak. Apr 16 02:33:18.999602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:33:19.026270 systemd[1]: Reload requested from client PID 2270 ('systemctl') (unit session-7.scope)... Apr 16 02:33:19.026298 systemd[1]: Reloading... Apr 16 02:33:19.105460 zram_generator::config[2311]: No configuration found. Apr 16 02:33:19.277534 systemd[1]: Reloading finished in 250 ms. Apr 16 02:33:19.338012 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 16 02:33:19.338152 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 16 02:33:19.338427 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:33:19.338464 systemd[1]: kubelet.service: Consumed 98ms CPU time, 98.2M memory peak. Apr 16 02:33:19.340395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:33:19.535009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:33:19.545671 (kubelet)[2361]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 02:33:19.595568 kubelet[2361]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Apr 16 02:33:19.595568 kubelet[2361]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 02:33:19.596201 kubelet[2361]: I0416 02:33:19.595594 2361 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 02:33:20.261090 kubelet[2361]: I0416 02:33:20.260966 2361 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 16 02:33:20.261090 kubelet[2361]: I0416 02:33:20.261107 2361 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 02:33:20.263122 kubelet[2361]: I0416 02:33:20.263068 2361 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 02:33:20.263122 kubelet[2361]: I0416 02:33:20.263113 2361 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 16 02:33:20.263437 kubelet[2361]: I0416 02:33:20.263408 2361 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 02:33:20.412244 kubelet[2361]: E0416 02:33:20.412148 2361 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 02:33:20.412244 kubelet[2361]: I0416 02:33:20.412174 2361 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 02:33:20.416853 kubelet[2361]: I0416 02:33:20.416815 2361 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 02:33:20.422667 kubelet[2361]: I0416 02:33:20.422183 2361 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 16 02:33:20.424166 kubelet[2361]: I0416 02:33:20.423872 2361 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 02:33:20.424315 kubelet[2361]: I0416 02:33:20.424154 2361 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 02:33:20.424461 kubelet[2361]: I0416 02:33:20.424320 2361 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 02:33:20.424461 
kubelet[2361]: I0416 02:33:20.424328 2361 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 02:33:20.424461 kubelet[2361]: I0416 02:33:20.424416 2361 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 02:33:20.427723 kubelet[2361]: I0416 02:33:20.427492 2361 state_mem.go:36] "Initialized new in-memory state store" Apr 16 02:33:20.427859 kubelet[2361]: I0416 02:33:20.427759 2361 kubelet.go:475] "Attempting to sync node with API server" Apr 16 02:33:20.427859 kubelet[2361]: I0416 02:33:20.427779 2361 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 02:33:20.427859 kubelet[2361]: I0416 02:33:20.427802 2361 kubelet.go:387] "Adding apiserver pod source" Apr 16 02:33:20.427859 kubelet[2361]: I0416 02:33:20.427815 2361 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 02:33:20.428256 kubelet[2361]: E0416 02:33:20.428192 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 02:33:20.429985 kubelet[2361]: I0416 02:33:20.429630 2361 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 16 02:33:20.429985 kubelet[2361]: E0416 02:33:20.429633 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 02:33:20.430199 kubelet[2361]: I0416 02:33:20.430169 2361 kubelet.go:940] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 02:33:20.430279 kubelet[2361]: I0416 02:33:20.430255 2361 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 02:33:20.430379 kubelet[2361]: W0416 02:33:20.430302 2361 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 16 02:33:20.434654 kubelet[2361]: I0416 02:33:20.434469 2361 server.go:1262] "Started kubelet" Apr 16 02:33:20.434826 kubelet[2361]: I0416 02:33:20.434791 2361 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 02:33:20.434885 kubelet[2361]: I0416 02:33:20.434858 2361 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 02:33:20.435278 kubelet[2361]: I0416 02:33:20.435241 2361 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 02:33:20.435370 kubelet[2361]: I0416 02:33:20.435309 2361 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 02:33:20.436033 kubelet[2361]: I0416 02:33:20.435676 2361 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 02:33:20.438815 kubelet[2361]: I0416 02:33:20.437733 2361 server.go:310] "Adding debug handlers to kubelet server" Apr 16 02:33:20.438815 kubelet[2361]: I0416 02:33:20.438689 2361 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 02:33:20.439949 kubelet[2361]: E0416 02:33:20.439896 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:33:20.439949 kubelet[2361]: I0416 02:33:20.439904 2361 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 16 02:33:20.440027 
kubelet[2361]: I0416 02:33:20.439913 2361 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 02:33:20.440058 kubelet[2361]: I0416 02:33:20.440040 2361 reconciler.go:29] "Reconciler: start to sync state" Apr 16 02:33:20.440277 kubelet[2361]: E0416 02:33:20.440175 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 02:33:20.440545 kubelet[2361]: E0416 02:33:20.440357 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="200ms" Apr 16 02:33:20.440601 kubelet[2361]: I0416 02:33:20.440545 2361 factory.go:223] Registration of the systemd container factory successfully Apr 16 02:33:20.440634 kubelet[2361]: I0416 02:33:20.440614 2361 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 02:33:20.440634 kubelet[2361]: E0416 02:33:20.438368 2361 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6b59fac015188 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 02:33:20.43443444 +0000 UTC 
m=+0.884726478,LastTimestamp:2026-04-16 02:33:20.43443444 +0000 UTC m=+0.884726478,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 02:33:20.440927 kubelet[2361]: E0416 02:33:20.440888 2361 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 02:33:20.441451 kubelet[2361]: I0416 02:33:20.441413 2361 factory.go:223] Registration of the containerd container factory successfully Apr 16 02:33:20.451780 kubelet[2361]: I0416 02:33:20.451556 2361 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 02:33:20.451780 kubelet[2361]: I0416 02:33:20.451572 2361 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 02:33:20.451780 kubelet[2361]: I0416 02:33:20.451586 2361 state_mem.go:36] "Initialized new in-memory state store" Apr 16 02:33:20.454152 kubelet[2361]: I0416 02:33:20.454013 2361 policy_none.go:49] "None policy: Start" Apr 16 02:33:20.454152 kubelet[2361]: I0416 02:33:20.454167 2361 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 02:33:20.454391 kubelet[2361]: I0416 02:33:20.454187 2361 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 02:33:20.455049 kubelet[2361]: I0416 02:33:20.455022 2361 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 02:33:20.456521 kubelet[2361]: I0416 02:33:20.456490 2361 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 16 02:33:20.456521 kubelet[2361]: I0416 02:33:20.456517 2361 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 16 02:33:20.456603 kubelet[2361]: I0416 02:33:20.456535 2361 kubelet.go:2428] "Starting kubelet main sync loop" Apr 16 02:33:20.456603 kubelet[2361]: E0416 02:33:20.456561 2361 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 02:33:20.457874 kubelet[2361]: E0416 02:33:20.456963 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 02:33:20.457874 kubelet[2361]: I0416 02:33:20.457263 2361 policy_none.go:47] "Start" Apr 16 02:33:20.463564 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 16 02:33:20.479795 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 16 02:33:20.482795 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 16 02:33:20.490345 kubelet[2361]: E0416 02:33:20.490311 2361 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 02:33:20.490499 kubelet[2361]: I0416 02:33:20.490481 2361 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 02:33:20.490533 kubelet[2361]: I0416 02:33:20.490508 2361 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 02:33:20.490707 kubelet[2361]: I0416 02:33:20.490692 2361 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 02:33:20.491710 kubelet[2361]: E0416 02:33:20.491604 2361 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 02:33:20.491948 kubelet[2361]: E0416 02:33:20.491929 2361 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 02:33:20.568393 systemd[1]: Created slice kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice - libcontainer container kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice. Apr 16 02:33:20.580127 kubelet[2361]: E0416 02:33:20.580091 2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:33:20.582980 systemd[1]: Created slice kubepods-burstable-podb22310a12b7bc6870ecc0fbdd998c7fd.slice - libcontainer container kubepods-burstable-podb22310a12b7bc6870ecc0fbdd998c7fd.slice. 
Apr 16 02:33:20.591920 kubelet[2361]: I0416 02:33:20.591854 2361 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 02:33:20.592209 kubelet[2361]: E0416 02:33:20.592160 2361 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Apr 16 02:33:20.598711 kubelet[2361]: E0416 02:33:20.598668 2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:33:20.601468 systemd[1]: Created slice kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice - libcontainer container kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice. Apr 16 02:33:20.602939 kubelet[2361]: E0416 02:33:20.602874 2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:33:20.640820 kubelet[2361]: I0416 02:33:20.640760 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b22310a12b7bc6870ecc0fbdd998c7fd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b22310a12b7bc6870ecc0fbdd998c7fd\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:33:20.640820 kubelet[2361]: I0416 02:33:20.640807 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:33:20.640820 kubelet[2361]: E0416 02:33:20.640810 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="400ms" Apr 16 02:33:20.640820 kubelet[2361]: I0416 02:33:20.640823 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:33:20.640820 kubelet[2361]: I0416 02:33:20.640839 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:33:20.641121 kubelet[2361]: I0416 02:33:20.640852 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b22310a12b7bc6870ecc0fbdd998c7fd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b22310a12b7bc6870ecc0fbdd998c7fd\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:33:20.641121 kubelet[2361]: I0416 02:33:20.640862 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b22310a12b7bc6870ecc0fbdd998c7fd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b22310a12b7bc6870ecc0fbdd998c7fd\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:33:20.641121 kubelet[2361]: I0416 02:33:20.640872 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:33:20.641121 kubelet[2361]: I0416 02:33:20.640882 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:33:20.641121 kubelet[2361]: I0416 02:33:20.640893 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 16 02:33:20.794936 kubelet[2361]: I0416 02:33:20.794831 2361 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 02:33:20.795705 kubelet[2361]: E0416 02:33:20.795612 2361 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Apr 16 02:33:20.884740 containerd[1580]: time="2026-04-16T02:33:20.884594797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,}" Apr 16 02:33:20.903757 containerd[1580]: time="2026-04-16T02:33:20.903698763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b22310a12b7bc6870ecc0fbdd998c7fd,Namespace:kube-system,Attempt:0,}" Apr 16 02:33:20.905811 containerd[1580]: time="2026-04-16T02:33:20.905741911Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,}" Apr 16 02:33:21.042756 kubelet[2361]: E0416 02:33:21.042628 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="800ms" Apr 16 02:33:21.197941 kubelet[2361]: I0416 02:33:21.197765 2361 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 02:33:21.198187 kubelet[2361]: E0416 02:33:21.198135 2361 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Apr 16 02:33:21.317150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1736396477.mount: Deactivated successfully. Apr 16 02:33:21.326067 containerd[1580]: time="2026-04-16T02:33:21.325976658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 02:33:21.326908 containerd[1580]: time="2026-04-16T02:33:21.326811962Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 16 02:33:21.330916 containerd[1580]: time="2026-04-16T02:33:21.330845116Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 02:33:21.333155 containerd[1580]: time="2026-04-16T02:33:21.333004881Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Apr 16 02:33:21.334257 containerd[1580]: time="2026-04-16T02:33:21.334177609Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 02:33:21.335516 containerd[1580]: time="2026-04-16T02:33:21.335424272Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 02:33:21.337180 containerd[1580]: time="2026-04-16T02:33:21.337100979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 02:33:21.337786 containerd[1580]: time="2026-04-16T02:33:21.337701962Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 450.751088ms" Apr 16 02:33:21.337876 containerd[1580]: time="2026-04-16T02:33:21.337816318Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 02:33:21.341619 containerd[1580]: time="2026-04-16T02:33:21.341394984Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 435.718775ms" Apr 16 02:33:21.345108 containerd[1580]: time="2026-04-16T02:33:21.344988561Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 437.057336ms" Apr 16 02:33:21.370557 containerd[1580]: time="2026-04-16T02:33:21.370473752Z" level=info msg="connecting to shim 7cb12f3068ba95d05f81127d1faf71527757adedc05ebde276c74dae75e74022" address="unix:///run/containerd/s/d3c7498efdf641dedf1c0f9a055ec5240e5f6707f962fa3ee52c28ee713a59f0" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:33:21.375371 containerd[1580]: time="2026-04-16T02:33:21.375330071Z" level=info msg="connecting to shim 14bc628c6ad9aede0f8bde3cbab1d8485d69499553f5bcef5540a899140a2dc1" address="unix:///run/containerd/s/4f99bacadd46809c9d8f335e6c86183e5c6d003660fa609cd3a71af94b8b8fbd" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:33:21.387346 containerd[1580]: time="2026-04-16T02:33:21.387196461Z" level=info msg="connecting to shim b0272492d42a4de0ccc13c4e98fe2ea551967cc3afa89d31d6d06f0ae92dab53" address="unix:///run/containerd/s/8fc2137c9ed7e01f461edec258a7780fbff864fc7a9e82fd0e6908e817921310" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:33:21.399514 systemd[1]: Started cri-containerd-14bc628c6ad9aede0f8bde3cbab1d8485d69499553f5bcef5540a899140a2dc1.scope - libcontainer container 14bc628c6ad9aede0f8bde3cbab1d8485d69499553f5bcef5540a899140a2dc1. Apr 16 02:33:21.401037 systemd[1]: Started cri-containerd-7cb12f3068ba95d05f81127d1faf71527757adedc05ebde276c74dae75e74022.scope - libcontainer container 7cb12f3068ba95d05f81127d1faf71527757adedc05ebde276c74dae75e74022. Apr 16 02:33:21.421725 systemd[1]: Started cri-containerd-b0272492d42a4de0ccc13c4e98fe2ea551967cc3afa89d31d6d06f0ae92dab53.scope - libcontainer container b0272492d42a4de0ccc13c4e98fe2ea551967cc3afa89d31d6d06f0ae92dab53. 
Apr 16 02:33:21.471841 containerd[1580]: time="2026-04-16T02:33:21.471652350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b22310a12b7bc6870ecc0fbdd998c7fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cb12f3068ba95d05f81127d1faf71527757adedc05ebde276c74dae75e74022\"" Apr 16 02:33:21.481947 containerd[1580]: time="2026-04-16T02:33:21.481805365Z" level=info msg="CreateContainer within sandbox \"7cb12f3068ba95d05f81127d1faf71527757adedc05ebde276c74dae75e74022\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 02:33:21.482087 containerd[1580]: time="2026-04-16T02:33:21.481973278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"14bc628c6ad9aede0f8bde3cbab1d8485d69499553f5bcef5540a899140a2dc1\"" Apr 16 02:33:21.483054 containerd[1580]: time="2026-04-16T02:33:21.483029990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0272492d42a4de0ccc13c4e98fe2ea551967cc3afa89d31d6d06f0ae92dab53\"" Apr 16 02:33:21.485598 containerd[1580]: time="2026-04-16T02:33:21.485551334Z" level=info msg="CreateContainer within sandbox \"14bc628c6ad9aede0f8bde3cbab1d8485d69499553f5bcef5540a899140a2dc1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 02:33:21.490598 containerd[1580]: time="2026-04-16T02:33:21.490529455Z" level=info msg="Container bf3531d91edcc0773eddacb9861012bb63475583ca5a638e3a3e1743fe288175: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:33:21.495361 containerd[1580]: time="2026-04-16T02:33:21.494747411Z" level=info msg="CreateContainer within sandbox \"b0272492d42a4de0ccc13c4e98fe2ea551967cc3afa89d31d6d06f0ae92dab53\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 
02:33:21.502167 kubelet[2361]: E0416 02:33:21.502112 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 02:33:21.502502 containerd[1580]: time="2026-04-16T02:33:21.502472572Z" level=info msg="Container 2a8dc7ecde3546ad821ccff6e1471b31085557a46fb515e53b29b1a3d6f1ef83: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:33:21.504553 containerd[1580]: time="2026-04-16T02:33:21.504501380Z" level=info msg="CreateContainer within sandbox \"7cb12f3068ba95d05f81127d1faf71527757adedc05ebde276c74dae75e74022\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bf3531d91edcc0773eddacb9861012bb63475583ca5a638e3a3e1743fe288175\"" Apr 16 02:33:21.505128 containerd[1580]: time="2026-04-16T02:33:21.505101068Z" level=info msg="StartContainer for \"bf3531d91edcc0773eddacb9861012bb63475583ca5a638e3a3e1743fe288175\"" Apr 16 02:33:21.506086 containerd[1580]: time="2026-04-16T02:33:21.506062512Z" level=info msg="connecting to shim bf3531d91edcc0773eddacb9861012bb63475583ca5a638e3a3e1743fe288175" address="unix:///run/containerd/s/d3c7498efdf641dedf1c0f9a055ec5240e5f6707f962fa3ee52c28ee713a59f0" protocol=ttrpc version=3 Apr 16 02:33:21.508762 containerd[1580]: time="2026-04-16T02:33:21.508682375Z" level=info msg="Container 54fb6fa2f74a0b02b5ea8d668f97048cff22faae48eeee483ee2c29073640557: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:33:21.511757 containerd[1580]: time="2026-04-16T02:33:21.511726960Z" level=info msg="CreateContainer within sandbox \"14bc628c6ad9aede0f8bde3cbab1d8485d69499553f5bcef5540a899140a2dc1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2a8dc7ecde3546ad821ccff6e1471b31085557a46fb515e53b29b1a3d6f1ef83\"" Apr 
16 02:33:21.512806 containerd[1580]: time="2026-04-16T02:33:21.512772309Z" level=info msg="StartContainer for \"2a8dc7ecde3546ad821ccff6e1471b31085557a46fb515e53b29b1a3d6f1ef83\"" Apr 16 02:33:21.513686 containerd[1580]: time="2026-04-16T02:33:21.513639568Z" level=info msg="connecting to shim 2a8dc7ecde3546ad821ccff6e1471b31085557a46fb515e53b29b1a3d6f1ef83" address="unix:///run/containerd/s/4f99bacadd46809c9d8f335e6c86183e5c6d003660fa609cd3a71af94b8b8fbd" protocol=ttrpc version=3 Apr 16 02:33:21.514414 containerd[1580]: time="2026-04-16T02:33:21.514365200Z" level=info msg="CreateContainer within sandbox \"b0272492d42a4de0ccc13c4e98fe2ea551967cc3afa89d31d6d06f0ae92dab53\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"54fb6fa2f74a0b02b5ea8d668f97048cff22faae48eeee483ee2c29073640557\"" Apr 16 02:33:21.514810 containerd[1580]: time="2026-04-16T02:33:21.514774234Z" level=info msg="StartContainer for \"54fb6fa2f74a0b02b5ea8d668f97048cff22faae48eeee483ee2c29073640557\"" Apr 16 02:33:21.515653 containerd[1580]: time="2026-04-16T02:33:21.515586894Z" level=info msg="connecting to shim 54fb6fa2f74a0b02b5ea8d668f97048cff22faae48eeee483ee2c29073640557" address="unix:///run/containerd/s/8fc2137c9ed7e01f461edec258a7780fbff864fc7a9e82fd0e6908e817921310" protocol=ttrpc version=3 Apr 16 02:33:21.519728 kubelet[2361]: E0416 02:33:21.519679 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 02:33:21.528483 systemd[1]: Started cri-containerd-bf3531d91edcc0773eddacb9861012bb63475583ca5a638e3a3e1743fe288175.scope - libcontainer container bf3531d91edcc0773eddacb9861012bb63475583ca5a638e3a3e1743fe288175. 
Apr 16 02:33:21.534513 systemd[1]: Started cri-containerd-2a8dc7ecde3546ad821ccff6e1471b31085557a46fb515e53b29b1a3d6f1ef83.scope - libcontainer container 2a8dc7ecde3546ad821ccff6e1471b31085557a46fb515e53b29b1a3d6f1ef83.
Apr 16 02:33:21.536286 systemd[1]: Started cri-containerd-54fb6fa2f74a0b02b5ea8d668f97048cff22faae48eeee483ee2c29073640557.scope - libcontainer container 54fb6fa2f74a0b02b5ea8d668f97048cff22faae48eeee483ee2c29073640557.
Apr 16 02:33:21.561429 kubelet[2361]: E0416 02:33:21.561381 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 16 02:33:21.579244 containerd[1580]: time="2026-04-16T02:33:21.579027096Z" level=info msg="StartContainer for \"bf3531d91edcc0773eddacb9861012bb63475583ca5a638e3a3e1743fe288175\" returns successfully"
Apr 16 02:33:21.588554 containerd[1580]: time="2026-04-16T02:33:21.588470409Z" level=info msg="StartContainer for \"54fb6fa2f74a0b02b5ea8d668f97048cff22faae48eeee483ee2c29073640557\" returns successfully"
Apr 16 02:33:21.604435 containerd[1580]: time="2026-04-16T02:33:21.604389417Z" level=info msg="StartContainer for \"2a8dc7ecde3546ad821ccff6e1471b31085557a46fb515e53b29b1a3d6f1ef83\" returns successfully"
Apr 16 02:33:22.002262 kubelet[2361]: I0416 02:33:22.002168 2361 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 16 02:33:22.467948 kubelet[2361]: E0416 02:33:22.467784 2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 02:33:22.472849 kubelet[2361]: E0416 02:33:22.471827 2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 02:33:22.474040 kubelet[2361]: E0416 02:33:22.474016 2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 02:33:22.747528 kubelet[2361]: E0416 02:33:22.747411 2361 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 16 02:33:22.938297 kubelet[2361]: I0416 02:33:22.938187 2361 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 16 02:33:22.938297 kubelet[2361]: E0416 02:33:22.938266 2361 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 16 02:33:22.941106 kubelet[2361]: I0416 02:33:22.941026 2361 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 16 02:33:22.947287 kubelet[2361]: E0416 02:33:22.947201 2361 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Apr 16 02:33:22.947287 kubelet[2361]: I0416 02:33:22.947272 2361 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 16 02:33:22.952759 kubelet[2361]: E0416 02:33:22.952702 2361 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Apr 16 02:33:22.952759 kubelet[2361]: I0416 02:33:22.952750 2361 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 16 02:33:22.957781 kubelet[2361]: E0416 02:33:22.957390 2361 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Apr 16 02:33:23.430619 kubelet[2361]: I0416 02:33:23.430562 2361 apiserver.go:52] "Watching apiserver"
Apr 16 02:33:23.440429 kubelet[2361]: I0416 02:33:23.440371 2361 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 16 02:33:23.476842 kubelet[2361]: I0416 02:33:23.476688 2361 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 16 02:33:23.477016 kubelet[2361]: I0416 02:33:23.476971 2361 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 16 02:33:23.479272 kubelet[2361]: E0416 02:33:23.479200 2361 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Apr 16 02:33:23.479652 kubelet[2361]: E0416 02:33:23.479634 2361 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Apr 16 02:33:25.030345 systemd[1]: Reload requested from client PID 2647 ('systemctl') (unit session-7.scope)...
Apr 16 02:33:25.030367 systemd[1]: Reloading...
Apr 16 02:33:25.087254 zram_generator::config[2690]: No configuration found.
Apr 16 02:33:25.330583 systemd[1]: Reloading finished in 299 ms.
Apr 16 02:33:25.364990 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 02:33:25.388133 systemd[1]: kubelet.service: Deactivated successfully.
Apr 16 02:33:25.388492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 02:33:25.388591 systemd[1]: kubelet.service: Consumed 1.297s CPU time, 124.7M memory peak.
Apr 16 02:33:25.390717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 02:33:25.541160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 02:33:25.548725 (kubelet)[2735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 02:33:25.593935 kubelet[2735]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 02:33:25.593935 kubelet[2735]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 02:33:25.593935 kubelet[2735]: I0416 02:33:25.593787 2735 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 02:33:25.601623 kubelet[2735]: I0416 02:33:25.601562 2735 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 16 02:33:25.601623 kubelet[2735]: I0416 02:33:25.601601 2735 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 02:33:25.601802 kubelet[2735]: I0416 02:33:25.601644 2735 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 16 02:33:25.601802 kubelet[2735]: I0416 02:33:25.601656 2735 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 02:33:25.601998 kubelet[2735]: I0416 02:33:25.601977 2735 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 16 02:33:25.603316 kubelet[2735]: I0416 02:33:25.603299 2735 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 16 02:33:25.605473 kubelet[2735]: I0416 02:33:25.605181 2735 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 02:33:25.608085 kubelet[2735]: I0416 02:33:25.608043 2735 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 16 02:33:25.610718 kubelet[2735]: I0416 02:33:25.610669 2735 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 16 02:33:25.610844 kubelet[2735]: I0416 02:33:25.610786 2735 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 02:33:25.610968 kubelet[2735]: I0416 02:33:25.610833 2735 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 16 02:33:25.610968 kubelet[2735]: I0416 02:33:25.610961 2735 topology_manager.go:138] "Creating topology manager with none policy"
Apr 16 02:33:25.610968 kubelet[2735]: I0416 02:33:25.610968 2735 container_manager_linux.go:306] "Creating device plugin manager"
Apr 16 02:33:25.611121 kubelet[2735]: I0416 02:33:25.610983 2735 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 16 02:33:25.611121 kubelet[2735]: I0416 02:33:25.611107 2735 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 02:33:25.611243 kubelet[2735]: I0416 02:33:25.611199 2735 kubelet.go:475] "Attempting to sync node with API server"
Apr 16 02:33:25.611274 kubelet[2735]: I0416 02:33:25.611262 2735 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 02:33:25.611302 kubelet[2735]: I0416 02:33:25.611297 2735 kubelet.go:387] "Adding apiserver pod source"
Apr 16 02:33:25.611338 kubelet[2735]: I0416 02:33:25.611323 2735 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 02:33:25.618630 kubelet[2735]: I0416 02:33:25.618438 2735 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 16 02:33:25.619955 kubelet[2735]: I0416 02:33:25.619056 2735 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 02:33:25.619955 kubelet[2735]: I0416 02:33:25.619090 2735 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 16 02:33:25.621260 kubelet[2735]: I0416 02:33:25.621240 2735 server.go:1262] "Started kubelet"
Apr 16 02:33:25.622996 kubelet[2735]: I0416 02:33:25.622690 2735 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 16 02:33:25.622996 kubelet[2735]: I0416 02:33:25.622751 2735 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 16 02:33:25.622996 kubelet[2735]: I0416 02:33:25.622954 2735 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 02:33:25.623089 kubelet[2735]: I0416 02:33:25.622995 2735 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 16 02:33:25.623644 kubelet[2735]: I0416 02:33:25.623603 2735 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 16 02:33:25.624436 kubelet[2735]: I0416 02:33:25.624395 2735 server.go:310] "Adding debug handlers to kubelet server"
Apr 16 02:33:25.626139 kubelet[2735]: I0416 02:33:25.625341 2735 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 16 02:33:25.626139 kubelet[2735]: I0416 02:33:25.625687 2735 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 16 02:33:25.626139 kubelet[2735]: I0416 02:33:25.625754 2735 reconciler.go:29] "Reconciler: start to sync state"
Apr 16 02:33:25.629475 kubelet[2735]: I0416 02:33:25.629450 2735 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 16 02:33:25.631791 kubelet[2735]: E0416 02:33:25.631774 2735 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 16 02:33:25.632156 kubelet[2735]: I0416 02:33:25.632059 2735 factory.go:223] Registration of the containerd container factory successfully
Apr 16 02:33:25.632283 kubelet[2735]: I0416 02:33:25.632277 2735 factory.go:223] Registration of the systemd container factory successfully
Apr 16 02:33:25.632944 kubelet[2735]: I0416 02:33:25.632782 2735 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 16 02:33:25.636608 kubelet[2735]: I0416 02:33:25.636389 2735 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 16 02:33:25.646270 kubelet[2735]: I0416 02:33:25.645175 2735 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 16 02:33:25.646270 kubelet[2735]: I0416 02:33:25.645229 2735 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 16 02:33:25.646270 kubelet[2735]: I0416 02:33:25.645250 2735 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 16 02:33:25.646270 kubelet[2735]: E0416 02:33:25.645288 2735 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 16 02:33:25.732727 kubelet[2735]: I0416 02:33:25.732693 2735 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 16 02:33:25.732727 kubelet[2735]: I0416 02:33:25.732712 2735 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 16 02:33:25.732727 kubelet[2735]: I0416 02:33:25.732727 2735 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 02:33:25.732909 kubelet[2735]: I0416 02:33:25.732832 2735 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 16 02:33:25.732909 kubelet[2735]: I0416 02:33:25.732839 2735 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 16 02:33:25.732909 kubelet[2735]: I0416 02:33:25.732852 2735 policy_none.go:49] "None policy: Start"
Apr 16 02:33:25.732909 kubelet[2735]: I0416 02:33:25.732859 2735 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 16 02:33:25.732909 kubelet[2735]: I0416 02:33:25.732865 2735 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 16 02:33:25.732984 kubelet[2735]: I0416 02:33:25.732921 2735 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 16 02:33:25.732984 kubelet[2735]: I0416 02:33:25.732926 2735 policy_none.go:47] "Start"
Apr 16 02:33:25.740832 kubelet[2735]: E0416 02:33:25.740731 2735 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 16 02:33:25.740983 kubelet[2735]: I0416 02:33:25.740887 2735 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 16 02:33:25.740983 kubelet[2735]: I0416 02:33:25.740899 2735 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 16 02:33:25.741248 kubelet[2735]: I0416 02:33:25.741151 2735 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 16 02:33:25.744946 kubelet[2735]: E0416 02:33:25.744915 2735 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 16 02:33:25.746170 kubelet[2735]: I0416 02:33:25.746033 2735 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 16 02:33:25.746170 kubelet[2735]: I0416 02:33:25.746185 2735 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 16 02:33:25.748269 kubelet[2735]: I0416 02:33:25.746030 2735 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 16 02:33:25.848336 kubelet[2735]: I0416 02:33:25.848114 2735 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 16 02:33:25.860786 kubelet[2735]: I0416 02:33:25.860682 2735 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Apr 16 02:33:25.860966 kubelet[2735]: I0416 02:33:25.860855 2735 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 16 02:33:25.927841 kubelet[2735]: I0416 02:33:25.927694 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 16 02:33:25.927841 kubelet[2735]: I0416 02:33:25.927836 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 16 02:33:25.928412 kubelet[2735]: I0416 02:33:25.927877 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost"
Apr 16 02:33:25.928412 kubelet[2735]: I0416 02:33:25.927901 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b22310a12b7bc6870ecc0fbdd998c7fd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b22310a12b7bc6870ecc0fbdd998c7fd\") " pod="kube-system/kube-apiserver-localhost"
Apr 16 02:33:25.928412 kubelet[2735]: I0416 02:33:25.927930 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b22310a12b7bc6870ecc0fbdd998c7fd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b22310a12b7bc6870ecc0fbdd998c7fd\") " pod="kube-system/kube-apiserver-localhost"
Apr 16 02:33:25.928412 kubelet[2735]: I0416 02:33:25.927959 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 16 02:33:25.928412 kubelet[2735]: I0416 02:33:25.928008 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 16 02:33:25.928626 kubelet[2735]: I0416 02:33:25.928060 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b22310a12b7bc6870ecc0fbdd998c7fd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b22310a12b7bc6870ecc0fbdd998c7fd\") " pod="kube-system/kube-apiserver-localhost"
Apr 16 02:33:25.928626 kubelet[2735]: I0416 02:33:25.928082 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 16 02:33:26.613492 kubelet[2735]: I0416 02:33:26.613411 2735 apiserver.go:52] "Watching apiserver"
Apr 16 02:33:26.626478 kubelet[2735]: I0416 02:33:26.626327 2735 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 16 02:33:26.727947 kubelet[2735]: I0416 02:33:26.727844 2735 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 16 02:33:26.738960 kubelet[2735]: E0416 02:33:26.738798 2735 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 16 02:33:26.749251 kubelet[2735]: I0416 02:33:26.748871 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.7488353239999999 podStartE2EDuration="1.748835324s" podCreationTimestamp="2026-04-16 02:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:33:26.732501067 +0000 UTC m=+1.179532548" watchObservedRunningTime="2026-04-16 02:33:26.748835324 +0000 UTC m=+1.195866811"
Apr 16 02:33:26.749251 kubelet[2735]: I0416 02:33:26.749119 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7491061829999999 podStartE2EDuration="1.749106183s" podCreationTimestamp="2026-04-16 02:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:33:26.748653375 +0000 UTC m=+1.195684853" watchObservedRunningTime="2026-04-16 02:33:26.749106183 +0000 UTC m=+1.196137672"
Apr 16 02:33:26.764262 kubelet[2735]: I0416 02:33:26.763804 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7637682159999999 podStartE2EDuration="1.763768216s" podCreationTimestamp="2026-04-16 02:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:33:26.763568845 +0000 UTC m=+1.210600322" watchObservedRunningTime="2026-04-16 02:33:26.763768216 +0000 UTC m=+1.210799707"
Apr 16 02:33:32.355748 kubelet[2735]: I0416 02:33:32.355558 2735 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 16 02:33:32.364623 kubelet[2735]: I0416 02:33:32.358309 2735 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 16 02:33:32.364732 containerd[1580]: time="2026-04-16T02:33:32.357736054Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 16 02:33:32.993067 systemd[1]: Created slice kubepods-besteffort-podd5cf22cd_9def_4816_a8a4_a266557b89b3.slice - libcontainer container kubepods-besteffort-podd5cf22cd_9def_4816_a8a4_a266557b89b3.slice.
Apr 16 02:33:33.058337 kubelet[2735]: I0416 02:33:33.055008 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5cf22cd-9def-4816-a8a4-a266557b89b3-lib-modules\") pod \"kube-proxy-hdjtb\" (UID: \"d5cf22cd-9def-4816-a8a4-a266557b89b3\") " pod="kube-system/kube-proxy-hdjtb"
Apr 16 02:33:33.058337 kubelet[2735]: I0416 02:33:33.055065 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frs56\" (UniqueName: \"kubernetes.io/projected/d5cf22cd-9def-4816-a8a4-a266557b89b3-kube-api-access-frs56\") pod \"kube-proxy-hdjtb\" (UID: \"d5cf22cd-9def-4816-a8a4-a266557b89b3\") " pod="kube-system/kube-proxy-hdjtb"
Apr 16 02:33:33.058337 kubelet[2735]: I0416 02:33:33.055092 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d5cf22cd-9def-4816-a8a4-a266557b89b3-kube-proxy\") pod \"kube-proxy-hdjtb\" (UID: \"d5cf22cd-9def-4816-a8a4-a266557b89b3\") " pod="kube-system/kube-proxy-hdjtb"
Apr 16 02:33:33.058337 kubelet[2735]: I0416 02:33:33.055117 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5cf22cd-9def-4816-a8a4-a266557b89b3-xtables-lock\") pod \"kube-proxy-hdjtb\" (UID: \"d5cf22cd-9def-4816-a8a4-a266557b89b3\") " pod="kube-system/kube-proxy-hdjtb"
Apr 16 02:33:33.339511 containerd[1580]: time="2026-04-16T02:33:33.338685326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hdjtb,Uid:d5cf22cd-9def-4816-a8a4-a266557b89b3,Namespace:kube-system,Attempt:0,}"
Apr 16 02:33:33.477558 containerd[1580]: time="2026-04-16T02:33:33.477458547Z" level=info msg="connecting to shim 482be240d2962b2f0476edb8ac98312b8d3195c93ee9bd489bf3e0429053ce21" address="unix:///run/containerd/s/8b8888198adf115279a3e28f672e978a00a76f3b79dca4a4c68293cc9f366213" namespace=k8s.io protocol=ttrpc version=3
Apr 16 02:33:33.593069 systemd[1]: Started cri-containerd-482be240d2962b2f0476edb8ac98312b8d3195c93ee9bd489bf3e0429053ce21.scope - libcontainer container 482be240d2962b2f0476edb8ac98312b8d3195c93ee9bd489bf3e0429053ce21.
Apr 16 02:33:33.615834 kubelet[2735]: I0416 02:33:33.615720 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6306b869-7eae-4060-ae1e-80bc093c5a67-var-lib-calico\") pod \"tigera-operator-5588576f44-56p5j\" (UID: \"6306b869-7eae-4060-ae1e-80bc093c5a67\") " pod="tigera-operator/tigera-operator-5588576f44-56p5j"
Apr 16 02:33:33.616508 kubelet[2735]: I0416 02:33:33.615927 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5hmw\" (UniqueName: \"kubernetes.io/projected/6306b869-7eae-4060-ae1e-80bc093c5a67-kube-api-access-r5hmw\") pod \"tigera-operator-5588576f44-56p5j\" (UID: \"6306b869-7eae-4060-ae1e-80bc093c5a67\") " pod="tigera-operator/tigera-operator-5588576f44-56p5j"
Apr 16 02:33:33.642679 systemd[1]: Created slice kubepods-besteffort-pod6306b869_7eae_4060_ae1e_80bc093c5a67.slice - libcontainer container kubepods-besteffort-pod6306b869_7eae_4060_ae1e_80bc093c5a67.slice.
Apr 16 02:33:33.696826 containerd[1580]: time="2026-04-16T02:33:33.696702116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hdjtb,Uid:d5cf22cd-9def-4816-a8a4-a266557b89b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"482be240d2962b2f0476edb8ac98312b8d3195c93ee9bd489bf3e0429053ce21\""
Apr 16 02:33:33.711491 containerd[1580]: time="2026-04-16T02:33:33.710870064Z" level=info msg="CreateContainer within sandbox \"482be240d2962b2f0476edb8ac98312b8d3195c93ee9bd489bf3e0429053ce21\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 16 02:33:33.742261 containerd[1580]: time="2026-04-16T02:33:33.740741811Z" level=info msg="Container 32e3fdb98bacc1916bd06adf4a93f24c113a6cc080fec552a7b6e2668098ad4b: CDI devices from CRI Config.CDIDevices: []"
Apr 16 02:33:33.747665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2292439401.mount: Deactivated successfully.
Apr 16 02:33:33.768248 containerd[1580]: time="2026-04-16T02:33:33.767967934Z" level=info msg="CreateContainer within sandbox \"482be240d2962b2f0476edb8ac98312b8d3195c93ee9bd489bf3e0429053ce21\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"32e3fdb98bacc1916bd06adf4a93f24c113a6cc080fec552a7b6e2668098ad4b\""
Apr 16 02:33:33.769953 containerd[1580]: time="2026-04-16T02:33:33.769816806Z" level=info msg="StartContainer for \"32e3fdb98bacc1916bd06adf4a93f24c113a6cc080fec552a7b6e2668098ad4b\""
Apr 16 02:33:33.779119 containerd[1580]: time="2026-04-16T02:33:33.779020650Z" level=info msg="connecting to shim 32e3fdb98bacc1916bd06adf4a93f24c113a6cc080fec552a7b6e2668098ad4b" address="unix:///run/containerd/s/8b8888198adf115279a3e28f672e978a00a76f3b79dca4a4c68293cc9f366213" protocol=ttrpc version=3
Apr 16 02:33:33.820712 systemd[1]: Started cri-containerd-32e3fdb98bacc1916bd06adf4a93f24c113a6cc080fec552a7b6e2668098ad4b.scope - libcontainer container 32e3fdb98bacc1916bd06adf4a93f24c113a6cc080fec552a7b6e2668098ad4b.
Apr 16 02:33:33.937199 containerd[1580]: time="2026-04-16T02:33:33.936591527Z" level=info msg="StartContainer for \"32e3fdb98bacc1916bd06adf4a93f24c113a6cc080fec552a7b6e2668098ad4b\" returns successfully"
Apr 16 02:33:33.959359 containerd[1580]: time="2026-04-16T02:33:33.958583316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-56p5j,Uid:6306b869-7eae-4060-ae1e-80bc093c5a67,Namespace:tigera-operator,Attempt:0,}"
Apr 16 02:33:34.002586 containerd[1580]: time="2026-04-16T02:33:34.001703528Z" level=info msg="connecting to shim 66b03c1daee5c81ff15a78580da09b1e2dce31f49eb06308f239f8314a4d6fc3" address="unix:///run/containerd/s/71720d32894c817d81348f2baab22a8fb3144acaf12ff7617bf7cf04efa90de4" namespace=k8s.io protocol=ttrpc version=3
Apr 16 02:33:34.099425 systemd[1]: Started cri-containerd-66b03c1daee5c81ff15a78580da09b1e2dce31f49eb06308f239f8314a4d6fc3.scope - libcontainer container 66b03c1daee5c81ff15a78580da09b1e2dce31f49eb06308f239f8314a4d6fc3.
Apr 16 02:33:34.228896 containerd[1580]: time="2026-04-16T02:33:34.228655306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-56p5j,Uid:6306b869-7eae-4060-ae1e-80bc093c5a67,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"66b03c1daee5c81ff15a78580da09b1e2dce31f49eb06308f239f8314a4d6fc3\""
Apr 16 02:33:34.238391 containerd[1580]: time="2026-04-16T02:33:34.237532754Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 16 02:33:34.920528 kubelet[2735]: I0416 02:33:34.919690 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hdjtb" podStartSLOduration=2.919640213 podStartE2EDuration="2.919640213s" podCreationTimestamp="2026-04-16 02:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:33:34.919298481 +0000 UTC m=+9.366329971" watchObservedRunningTime="2026-04-16 02:33:34.919640213 +0000 UTC m=+9.366671691"
Apr 16 02:33:35.864749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount739530995.mount: Deactivated successfully.
Apr 16 02:33:36.618766 containerd[1580]: time="2026-04-16T02:33:36.618580262Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:33:36.619437 containerd[1580]: time="2026-04-16T02:33:36.619391264Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 16 02:33:36.621462 containerd[1580]: time="2026-04-16T02:33:36.621392794Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:33:36.624634 containerd[1580]: time="2026-04-16T02:33:36.624552115Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:33:36.625562 containerd[1580]: time="2026-04-16T02:33:36.625470503Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.387847703s"
Apr 16 02:33:36.626282 containerd[1580]: time="2026-04-16T02:33:36.626196108Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 16 02:33:36.633366 containerd[1580]: time="2026-04-16T02:33:36.632800661Z" level=info msg="CreateContainer within sandbox \"66b03c1daee5c81ff15a78580da09b1e2dce31f49eb06308f239f8314a4d6fc3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 16 02:33:36.646929 containerd[1580]: time="2026-04-16T02:33:36.646854556Z" level=info msg="Container b1bec2dd68828e3a6244c91731321fc51e78fcfdbbbb51a97d698a1d00e780f4: CDI devices from CRI Config.CDIDevices: []"
Apr 16 02:33:36.651656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2702719306.mount: Deactivated successfully.
Apr 16 02:33:36.657078 containerd[1580]: time="2026-04-16T02:33:36.656947306Z" level=info msg="CreateContainer within sandbox \"66b03c1daee5c81ff15a78580da09b1e2dce31f49eb06308f239f8314a4d6fc3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b1bec2dd68828e3a6244c91731321fc51e78fcfdbbbb51a97d698a1d00e780f4\""
Apr 16 02:33:36.657892 containerd[1580]: time="2026-04-16T02:33:36.657862969Z" level=info msg="StartContainer for \"b1bec2dd68828e3a6244c91731321fc51e78fcfdbbbb51a97d698a1d00e780f4\""
Apr 16 02:33:36.659265 containerd[1580]: time="2026-04-16T02:33:36.659169967Z" level=info msg="connecting to shim b1bec2dd68828e3a6244c91731321fc51e78fcfdbbbb51a97d698a1d00e780f4" address="unix:///run/containerd/s/71720d32894c817d81348f2baab22a8fb3144acaf12ff7617bf7cf04efa90de4" protocol=ttrpc version=3
Apr 16 02:33:36.682012 systemd[1]: Started cri-containerd-b1bec2dd68828e3a6244c91731321fc51e78fcfdbbbb51a97d698a1d00e780f4.scope - libcontainer container b1bec2dd68828e3a6244c91731321fc51e78fcfdbbbb51a97d698a1d00e780f4.
Apr 16 02:33:36.730527 containerd[1580]: time="2026-04-16T02:33:36.730478684Z" level=info msg="StartContainer for \"b1bec2dd68828e3a6244c91731321fc51e78fcfdbbbb51a97d698a1d00e780f4\" returns successfully"
Apr 16 02:33:43.015849 sudo[1789]: pam_unix(sudo:session): session closed for user root
Apr 16 02:33:43.021634 sshd[1788]: Connection closed by 10.0.0.1 port 42704
Apr 16 02:33:43.024095 sshd-session[1785]: pam_unix(sshd:session): session closed for user core
Apr 16 02:33:43.033984 systemd[1]: sshd@6-10.0.0.48:22-10.0.0.1:42704.service: Deactivated successfully.
Apr 16 02:33:43.049042 systemd[1]: session-7.scope: Deactivated successfully.
Apr 16 02:33:43.050680 systemd[1]: session-7.scope: Consumed 5.925s CPU time, 227.6M memory peak.
Apr 16 02:33:43.054122 systemd-logind[1562]: Session 7 logged out. Waiting for processes to exit.
Apr 16 02:33:43.059724 systemd-logind[1562]: Removed session 7.
Apr 16 02:33:46.768633 update_engine[1566]: I20260416 02:33:46.768471 1566 update_attempter.cc:509] Updating boot flags...
Apr 16 02:33:47.746260 kubelet[2735]: I0416 02:33:47.745490 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-56p5j" podStartSLOduration=12.353920775 podStartE2EDuration="14.745447976s" podCreationTimestamp="2026-04-16 02:33:33 +0000 UTC" firstStartedPulling="2026-04-16 02:33:34.235952392 +0000 UTC m=+8.682983875" lastFinishedPulling="2026-04-16 02:33:36.627479594 +0000 UTC m=+11.074511076" observedRunningTime="2026-04-16 02:33:36.965543234 +0000 UTC m=+11.412574716" watchObservedRunningTime="2026-04-16 02:33:47.745447976 +0000 UTC m=+22.192479465"
Apr 16 02:33:47.808729 systemd[1]: Created slice kubepods-besteffort-podb8935814_c702_423d_aec1_40d4532f7662.slice - libcontainer container kubepods-besteffort-podb8935814_c702_423d_aec1_40d4532f7662.slice.
Apr 16 02:33:47.914566 kubelet[2735]: I0416 02:33:47.914442 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9trqz\" (UniqueName: \"kubernetes.io/projected/b8935814-c702-423d-aec1-40d4532f7662-kube-api-access-9trqz\") pod \"calico-typha-f7549cfc4-qm2z8\" (UID: \"b8935814-c702-423d-aec1-40d4532f7662\") " pod="calico-system/calico-typha-f7549cfc4-qm2z8"
Apr 16 02:33:47.914986 kubelet[2735]: I0416 02:33:47.914826 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8935814-c702-423d-aec1-40d4532f7662-tigera-ca-bundle\") pod \"calico-typha-f7549cfc4-qm2z8\" (UID: \"b8935814-c702-423d-aec1-40d4532f7662\") " pod="calico-system/calico-typha-f7549cfc4-qm2z8"
Apr 16 02:33:47.914986 kubelet[2735]: I0416 02:33:47.914907 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b8935814-c702-423d-aec1-40d4532f7662-typha-certs\") pod \"calico-typha-f7549cfc4-qm2z8\" (UID: \"b8935814-c702-423d-aec1-40d4532f7662\") " pod="calico-system/calico-typha-f7549cfc4-qm2z8"
Apr 16 02:33:48.000083 systemd[1]: Created slice kubepods-besteffort-pod2af62fe7_ef60_4706_b8eb_45d8ad27e860.slice - libcontainer container kubepods-besteffort-pod2af62fe7_ef60_4706_b8eb_45d8ad27e860.slice.
Apr 16 02:33:48.115863 kubelet[2735]: E0416 02:33:48.115634 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2l7kg" podUID="948e86e4-5504-4096-ac1e-cb13b7489af3"
Apr 16 02:33:48.120684 kubelet[2735]: I0416 02:33:48.120506 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2af62fe7-ef60-4706-b8eb-45d8ad27e860-lib-modules\") pod \"calico-node-vvz7b\" (UID: \"2af62fe7-ef60-4706-b8eb-45d8ad27e860\") " pod="calico-system/calico-node-vvz7b"
Apr 16 02:33:48.123470 kubelet[2735]: I0416 02:33:48.121990 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2af62fe7-ef60-4706-b8eb-45d8ad27e860-var-run-calico\") pod \"calico-node-vvz7b\" (UID: \"2af62fe7-ef60-4706-b8eb-45d8ad27e860\") " pod="calico-system/calico-node-vvz7b"
Apr 16 02:33:48.123470 kubelet[2735]: I0416 02:33:48.123043 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2af62fe7-ef60-4706-b8eb-45d8ad27e860-node-certs\") pod \"calico-node-vvz7b\" (UID: \"2af62fe7-ef60-4706-b8eb-45d8ad27e860\") " pod="calico-system/calico-node-vvz7b"
Apr 16 02:33:48.123470 kubelet[2735]: I0416 02:33:48.123065 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2af62fe7-ef60-4706-b8eb-45d8ad27e860-policysync\") pod \"calico-node-vvz7b\" (UID: \"2af62fe7-ef60-4706-b8eb-45d8ad27e860\") " pod="calico-system/calico-node-vvz7b"
Apr 16 02:33:48.123470 kubelet[2735]: I0416 02:33:48.123089 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/2af62fe7-ef60-4706-b8eb-45d8ad27e860-bpffs\") pod \"calico-node-vvz7b\" (UID: \"2af62fe7-ef60-4706-b8eb-45d8ad27e860\") " pod="calico-system/calico-node-vvz7b"
Apr 16 02:33:48.123470 kubelet[2735]: I0416 02:33:48.123114 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2af62fe7-ef60-4706-b8eb-45d8ad27e860-cni-bin-dir\") pod \"calico-node-vvz7b\" (UID: \"2af62fe7-ef60-4706-b8eb-45d8ad27e860\") " pod="calico-system/calico-node-vvz7b"
Apr 16 02:33:48.123795 kubelet[2735]: I0416 02:33:48.123132 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2af62fe7-ef60-4706-b8eb-45d8ad27e860-cni-net-dir\") pod \"calico-node-vvz7b\" (UID: \"2af62fe7-ef60-4706-b8eb-45d8ad27e860\") " pod="calico-system/calico-node-vvz7b"
Apr 16 02:33:48.123795 kubelet[2735]: I0416 02:33:48.123150 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2af62fe7-ef60-4706-b8eb-45d8ad27e860-var-lib-calico\") pod \"calico-node-vvz7b\" (UID: \"2af62fe7-ef60-4706-b8eb-45d8ad27e860\") " pod="calico-system/calico-node-vvz7b"
Apr 16 02:33:48.123795 kubelet[2735]: I0416 02:33:48.123174 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2af62fe7-ef60-4706-b8eb-45d8ad27e860-xtables-lock\") pod \"calico-node-vvz7b\" (UID: \"2af62fe7-ef60-4706-b8eb-45d8ad27e860\") " pod="calico-system/calico-node-vvz7b"
Apr 16 02:33:48.123795 kubelet[2735]: I0416 02:33:48.123192 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2af62fe7-ef60-4706-b8eb-45d8ad27e860-flexvol-driver-host\") pod \"calico-node-vvz7b\" (UID: \"2af62fe7-ef60-4706-b8eb-45d8ad27e860\") " pod="calico-system/calico-node-vvz7b"
Apr 16 02:33:48.124534 kubelet[2735]: I0416 02:33:48.124405 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/2af62fe7-ef60-4706-b8eb-45d8ad27e860-nodeproc\") pod \"calico-node-vvz7b\" (UID: \"2af62fe7-ef60-4706-b8eb-45d8ad27e860\") " pod="calico-system/calico-node-vvz7b"
Apr 16 02:33:48.124940 kubelet[2735]: I0416 02:33:48.124894 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2af62fe7-ef60-4706-b8eb-45d8ad27e860-cni-log-dir\") pod \"calico-node-vvz7b\" (UID: \"2af62fe7-ef60-4706-b8eb-45d8ad27e860\") " pod="calico-system/calico-node-vvz7b"
Apr 16 02:33:48.125926 kubelet[2735]: I0416 02:33:48.125823 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x92ll\" (UniqueName: \"kubernetes.io/projected/2af62fe7-ef60-4706-b8eb-45d8ad27e860-kube-api-access-x92ll\") pod \"calico-node-vvz7b\" (UID: \"2af62fe7-ef60-4706-b8eb-45d8ad27e860\") " pod="calico-system/calico-node-vvz7b"
Apr 16 02:33:48.127305 kubelet[2735]: I0416 02:33:48.126196 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/2af62fe7-ef60-4706-b8eb-45d8ad27e860-sys-fs\") pod \"calico-node-vvz7b\" (UID: \"2af62fe7-ef60-4706-b8eb-45d8ad27e860\") " pod="calico-system/calico-node-vvz7b"
Apr 16 02:33:48.127983 kubelet[2735]: I0416 02:33:48.127956 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2af62fe7-ef60-4706-b8eb-45d8ad27e860-tigera-ca-bundle\") pod \"calico-node-vvz7b\" (UID: \"2af62fe7-ef60-4706-b8eb-45d8ad27e860\") " pod="calico-system/calico-node-vvz7b"
Apr 16 02:33:48.130502 containerd[1580]: time="2026-04-16T02:33:48.129539995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f7549cfc4-qm2z8,Uid:b8935814-c702-423d-aec1-40d4532f7662,Namespace:calico-system,Attempt:0,}"
Apr 16 02:33:48.229164 kubelet[2735]: I0416 02:33:48.228882 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/948e86e4-5504-4096-ac1e-cb13b7489af3-kubelet-dir\") pod \"csi-node-driver-2l7kg\" (UID: \"948e86e4-5504-4096-ac1e-cb13b7489af3\") " pod="calico-system/csi-node-driver-2l7kg"
Apr 16 02:33:48.229164 kubelet[2735]: I0416 02:33:48.228999 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/948e86e4-5504-4096-ac1e-cb13b7489af3-registration-dir\") pod \"csi-node-driver-2l7kg\" (UID: \"948e86e4-5504-4096-ac1e-cb13b7489af3\") " pod="calico-system/csi-node-driver-2l7kg"
Apr 16 02:33:48.229164 kubelet[2735]: I0416 02:33:48.229014 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtcmv\" (UniqueName: \"kubernetes.io/projected/948e86e4-5504-4096-ac1e-cb13b7489af3-kube-api-access-mtcmv\") pod \"csi-node-driver-2l7kg\" (UID: \"948e86e4-5504-4096-ac1e-cb13b7489af3\") " pod="calico-system/csi-node-driver-2l7kg"
Apr 16 02:33:48.230816 kubelet[2735]: I0416 02:33:48.229173 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/948e86e4-5504-4096-ac1e-cb13b7489af3-socket-dir\") pod \"csi-node-driver-2l7kg\" (UID: \"948e86e4-5504-4096-ac1e-cb13b7489af3\") " pod="calico-system/csi-node-driver-2l7kg"
Apr 16 02:33:48.233496 kubelet[2735]: I0416 02:33:48.229208 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/948e86e4-5504-4096-ac1e-cb13b7489af3-varrun\") pod \"csi-node-driver-2l7kg\" (UID: \"948e86e4-5504-4096-ac1e-cb13b7489af3\") " pod="calico-system/csi-node-driver-2l7kg"
Apr 16 02:33:48.263831 kubelet[2735]: E0416 02:33:48.263697 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.267915 kubelet[2735]: W0416 02:33:48.267750 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.268168 kubelet[2735]: E0416 02:33:48.268154 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.284326 kubelet[2735]: E0416 02:33:48.283391 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.284730 kubelet[2735]: W0416 02:33:48.284566 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.284730 kubelet[2735]: E0416 02:33:48.284616 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.290180 containerd[1580]: time="2026-04-16T02:33:48.287674889Z" level=info msg="connecting to shim a6128e3d2520e93de3eafd62fd7fa55a673452a25854503bded20821ef31e3f0" address="unix:///run/containerd/s/65ab014a7279a80dc5c975b446058f07983cab5d149e13b11dfbb04f08e9f156" namespace=k8s.io protocol=ttrpc version=3
Apr 16 02:33:48.318821 containerd[1580]: time="2026-04-16T02:33:48.318716522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vvz7b,Uid:2af62fe7-ef60-4706-b8eb-45d8ad27e860,Namespace:calico-system,Attempt:0,}"
Apr 16 02:33:48.337364 kubelet[2735]: E0416 02:33:48.335396 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.337364 kubelet[2735]: W0416 02:33:48.335439 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.337364 kubelet[2735]: E0416 02:33:48.335548 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.340356 kubelet[2735]: E0416 02:33:48.339510 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.340714 kubelet[2735]: W0416 02:33:48.340590 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.340714 kubelet[2735]: E0416 02:33:48.340641 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.343010 kubelet[2735]: E0416 02:33:48.342743 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.344530 kubelet[2735]: W0416 02:33:48.344333 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.344530 kubelet[2735]: E0416 02:33:48.344391 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.351998 kubelet[2735]: E0416 02:33:48.351439 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.351998 kubelet[2735]: W0416 02:33:48.351480 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.351998 kubelet[2735]: E0416 02:33:48.351512 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.355159 kubelet[2735]: E0416 02:33:48.354481 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.355159 kubelet[2735]: W0416 02:33:48.354521 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.355159 kubelet[2735]: E0416 02:33:48.354592 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.356586 kubelet[2735]: E0416 02:33:48.356130 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.356586 kubelet[2735]: W0416 02:33:48.356156 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.356586 kubelet[2735]: E0416 02:33:48.356182 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.357945 kubelet[2735]: E0416 02:33:48.357895 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.358383 kubelet[2735]: W0416 02:33:48.358183 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.358790 kubelet[2735]: E0416 02:33:48.358734 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.359501 kubelet[2735]: E0416 02:33:48.359467 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.359743 kubelet[2735]: W0416 02:33:48.359718 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.360311 kubelet[2735]: E0416 02:33:48.360035 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.362102 kubelet[2735]: E0416 02:33:48.361917 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.366966 kubelet[2735]: W0416 02:33:48.365136 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.366966 kubelet[2735]: E0416 02:33:48.365412 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.374285 kubelet[2735]: E0416 02:33:48.374139 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.375368 kubelet[2735]: W0416 02:33:48.375024 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.378502 kubelet[2735]: E0416 02:33:48.377002 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.379524 kubelet[2735]: E0416 02:33:48.379474 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.380186 kubelet[2735]: W0416 02:33:48.379765 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.380186 kubelet[2735]: E0416 02:33:48.379795 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.380638 systemd[1]: Started cri-containerd-a6128e3d2520e93de3eafd62fd7fa55a673452a25854503bded20821ef31e3f0.scope - libcontainer container a6128e3d2520e93de3eafd62fd7fa55a673452a25854503bded20821ef31e3f0.
Apr 16 02:33:48.382936 kubelet[2735]: E0416 02:33:48.382884 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.382936 kubelet[2735]: W0416 02:33:48.382925 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.383285 kubelet[2735]: E0416 02:33:48.382960 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.384045 containerd[1580]: time="2026-04-16T02:33:48.383426281Z" level=info msg="connecting to shim cfe2c4110f3effb123267128f910c0022b7793b9fdf5d4f4f7a8805b79151eb6" address="unix:///run/containerd/s/b2ff8976437234f94b556c48bdff3701900b1dd3676b4b8fee594ad9a8ca86bd" namespace=k8s.io protocol=ttrpc version=3
Apr 16 02:33:48.384561 kubelet[2735]: E0416 02:33:48.383832 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.384561 kubelet[2735]: W0416 02:33:48.383855 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.384561 kubelet[2735]: E0416 02:33:48.383882 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.384691 kubelet[2735]: E0416 02:33:48.384599 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.384691 kubelet[2735]: W0416 02:33:48.384625 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.384691 kubelet[2735]: E0416 02:33:48.384651 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.385804 kubelet[2735]: E0416 02:33:48.385189 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.385804 kubelet[2735]: W0416 02:33:48.385431 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.385804 kubelet[2735]: E0416 02:33:48.385483 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.386167 kubelet[2735]: E0416 02:33:48.386124 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.386167 kubelet[2735]: W0416 02:33:48.386159 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.386519 kubelet[2735]: E0416 02:33:48.386178 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.387069 kubelet[2735]: E0416 02:33:48.386917 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.387942 kubelet[2735]: W0416 02:33:48.387275 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.387942 kubelet[2735]: E0416 02:33:48.387613 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.390359 kubelet[2735]: E0416 02:33:48.390310 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.391174 kubelet[2735]: W0416 02:33:48.390484 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.391174 kubelet[2735]: E0416 02:33:48.390507 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.391573 kubelet[2735]: E0416 02:33:48.391523 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.391618 kubelet[2735]: W0416 02:33:48.391553 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.391695 kubelet[2735]: E0416 02:33:48.391618 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.391984 kubelet[2735]: E0416 02:33:48.391940 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.391984 kubelet[2735]: W0416 02:33:48.391962 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.391984 kubelet[2735]: E0416 02:33:48.391972 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.392299 kubelet[2735]: E0416 02:33:48.392284 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.392335 kubelet[2735]: W0416 02:33:48.392299 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.392335 kubelet[2735]: E0416 02:33:48.392310 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.393088 kubelet[2735]: E0416 02:33:48.392909 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.393088 kubelet[2735]: W0416 02:33:48.393010 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.393088 kubelet[2735]: E0416 02:33:48.393035 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.394978 kubelet[2735]: E0416 02:33:48.394923 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.395118 kubelet[2735]: W0416 02:33:48.394998 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.395118 kubelet[2735]: E0416 02:33:48.395023 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.396503 kubelet[2735]: E0416 02:33:48.395931 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.396503 kubelet[2735]: W0416 02:33:48.396007 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.396503 kubelet[2735]: E0416 02:33:48.396032 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.397705 kubelet[2735]: E0416 02:33:48.396924 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.397705 kubelet[2735]: W0416 02:33:48.396947 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.397705 kubelet[2735]: E0416 02:33:48.396964 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.491624 systemd[1]: Started cri-containerd-cfe2c4110f3effb123267128f910c0022b7793b9fdf5d4f4f7a8805b79151eb6.scope - libcontainer container cfe2c4110f3effb123267128f910c0022b7793b9fdf5d4f4f7a8805b79151eb6.
Apr 16 02:33:48.500940 kubelet[2735]: E0416 02:33:48.497174 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 02:33:48.500940 kubelet[2735]: W0416 02:33:48.497242 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 02:33:48.500940 kubelet[2735]: E0416 02:33:48.497299 2735 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 02:33:48.573697 containerd[1580]: time="2026-04-16T02:33:48.572878348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vvz7b,Uid:2af62fe7-ef60-4706-b8eb-45d8ad27e860,Namespace:calico-system,Attempt:0,} returns sandbox id \"cfe2c4110f3effb123267128f910c0022b7793b9fdf5d4f4f7a8805b79151eb6\""
Apr 16 02:33:48.585673 containerd[1580]: time="2026-04-16T02:33:48.585616055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 16 02:33:48.592152 containerd[1580]: time="2026-04-16T02:33:48.592063210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f7549cfc4-qm2z8,Uid:b8935814-c702-423d-aec1-40d4532f7662,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6128e3d2520e93de3eafd62fd7fa55a673452a25854503bded20821ef31e3f0\""
Apr 16 02:33:49.649135 kubelet[2735]: E0416 02:33:49.649062 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2l7kg" podUID="948e86e4-5504-4096-ac1e-cb13b7489af3"
Apr 16 02:33:50.138240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3533548407.mount: Deactivated successfully.
Apr 16 02:33:50.225297 containerd[1580]: time="2026-04-16T02:33:50.225133886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:33:50.226293 containerd[1580]: time="2026-04-16T02:33:50.226234534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433"
Apr 16 02:33:50.228072 containerd[1580]: time="2026-04-16T02:33:50.228009645Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:33:50.232578 containerd[1580]: time="2026-04-16T02:33:50.232505607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:33:50.234065 containerd[1580]: time="2026-04-16T02:33:50.233797174Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.648064534s"
Apr 16 02:33:50.234065 containerd[1580]: time="2026-04-16T02:33:50.233888594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Apr 16 02:33:50.236787 containerd[1580]: time="2026-04-16T02:33:50.236736799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Apr 16 02:33:50.243433 containerd[1580]: time="2026-04-16T02:33:50.243381246Z" level=info msg="CreateContainer within sandbox \"cfe2c4110f3effb123267128f910c0022b7793b9fdf5d4f4f7a8805b79151eb6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 16 02:33:50.255792 containerd[1580]: time="2026-04-16T02:33:50.255733625Z" level=info msg="Container b7c1ed76e66564e03b67bd03b84d5073085db529922d81c2a6212be55df3259d: CDI devices from CRI Config.CDIDevices: []"
Apr 16 02:33:50.267596 containerd[1580]: time="2026-04-16T02:33:50.267451136Z" level=info msg="CreateContainer within sandbox \"cfe2c4110f3effb123267128f910c0022b7793b9fdf5d4f4f7a8805b79151eb6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b7c1ed76e66564e03b67bd03b84d5073085db529922d81c2a6212be55df3259d\""
Apr 16 02:33:50.268150 containerd[1580]: time="2026-04-16T02:33:50.268125632Z" level=info msg="StartContainer for \"b7c1ed76e66564e03b67bd03b84d5073085db529922d81c2a6212be55df3259d\""
Apr 16 02:33:50.269436 containerd[1580]: time="2026-04-16T02:33:50.269379406Z" level=info msg="connecting to shim b7c1ed76e66564e03b67bd03b84d5073085db529922d81c2a6212be55df3259d" address="unix:///run/containerd/s/b2ff8976437234f94b556c48bdff3701900b1dd3676b4b8fee594ad9a8ca86bd" protocol=ttrpc version=3
Apr 16 02:33:50.300688 systemd[1]: Started cri-containerd-b7c1ed76e66564e03b67bd03b84d5073085db529922d81c2a6212be55df3259d.scope - libcontainer container b7c1ed76e66564e03b67bd03b84d5073085db529922d81c2a6212be55df3259d.
Apr 16 02:33:50.386945 systemd[1]: cri-containerd-b7c1ed76e66564e03b67bd03b84d5073085db529922d81c2a6212be55df3259d.scope: Deactivated successfully.
Apr 16 02:33:50.389368 containerd[1580]: time="2026-04-16T02:33:50.389160522Z" level=info msg="StartContainer for \"b7c1ed76e66564e03b67bd03b84d5073085db529922d81c2a6212be55df3259d\" returns successfully"
Apr 16 02:33:50.390669 containerd[1580]: time="2026-04-16T02:33:50.390627564Z" level=info msg="received container exit event container_id:\"b7c1ed76e66564e03b67bd03b84d5073085db529922d81c2a6212be55df3259d\" id:\"b7c1ed76e66564e03b67bd03b84d5073085db529922d81c2a6212be55df3259d\" pid:3317 exited_at:{seconds:1776306830 nanos:390277472}"
Apr 16 02:33:50.431452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7c1ed76e66564e03b67bd03b84d5073085db529922d81c2a6212be55df3259d-rootfs.mount: Deactivated successfully.
Apr 16 02:33:51.649888 kubelet[2735]: E0416 02:33:51.649812 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2l7kg" podUID="948e86e4-5504-4096-ac1e-cb13b7489af3"
Apr 16 02:33:53.178584 containerd[1580]: time="2026-04-16T02:33:53.178510462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:33:53.179536 containerd[1580]: time="2026-04-16T02:33:53.179463047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413"
Apr 16 02:33:53.181160 containerd[1580]: time="2026-04-16T02:33:53.181065042Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:33:53.184633 containerd[1580]: time="2026-04-16T02:33:53.184546654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:33:53.185517 containerd[1580]: time="2026-04-16T02:33:53.185446260Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.948660776s"
Apr 16 02:33:53.185517 containerd[1580]: time="2026-04-16T02:33:53.185492567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Apr 16 02:33:53.187347 containerd[1580]: time="2026-04-16T02:33:53.187074823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Apr 16 02:33:53.205056 containerd[1580]: time="2026-04-16T02:33:53.204835660Z" level=info msg="CreateContainer within sandbox \"a6128e3d2520e93de3eafd62fd7fa55a673452a25854503bded20821ef31e3f0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 16 02:33:53.223895 containerd[1580]: time="2026-04-16T02:33:53.223836847Z" level=info msg="Container b6bef2e5e91cd140ea34557b116ead0f377ac22e85314c27111c915723fa9f35: CDI devices from CRI Config.CDIDevices: []"
Apr 16 02:33:53.227818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3553946836.mount: Deactivated successfully.
Apr 16 02:33:53.236871 containerd[1580]: time="2026-04-16T02:33:53.236519187Z" level=info msg="CreateContainer within sandbox \"a6128e3d2520e93de3eafd62fd7fa55a673452a25854503bded20821ef31e3f0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b6bef2e5e91cd140ea34557b116ead0f377ac22e85314c27111c915723fa9f35\""
Apr 16 02:33:53.238491 containerd[1580]: time="2026-04-16T02:33:53.238383519Z" level=info msg="StartContainer for \"b6bef2e5e91cd140ea34557b116ead0f377ac22e85314c27111c915723fa9f35\""
Apr 16 02:33:53.240149 containerd[1580]: time="2026-04-16T02:33:53.240061685Z" level=info msg="connecting to shim b6bef2e5e91cd140ea34557b116ead0f377ac22e85314c27111c915723fa9f35" address="unix:///run/containerd/s/65ab014a7279a80dc5c975b446058f07983cab5d149e13b11dfbb04f08e9f156" protocol=ttrpc version=3
Apr 16 02:33:53.284733 systemd[1]: Started cri-containerd-b6bef2e5e91cd140ea34557b116ead0f377ac22e85314c27111c915723fa9f35.scope - libcontainer container b6bef2e5e91cd140ea34557b116ead0f377ac22e85314c27111c915723fa9f35.
Apr 16 02:33:53.482488 containerd[1580]: time="2026-04-16T02:33:53.482427542Z" level=info msg="StartContainer for \"b6bef2e5e91cd140ea34557b116ead0f377ac22e85314c27111c915723fa9f35\" returns successfully"
Apr 16 02:33:53.648040 kubelet[2735]: E0416 02:33:53.647980 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2l7kg" podUID="948e86e4-5504-4096-ac1e-cb13b7489af3"
Apr 16 02:33:55.035613 kubelet[2735]: I0416 02:33:55.035529 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 02:33:55.647137 kubelet[2735]: E0416 02:33:55.646788 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2l7kg" podUID="948e86e4-5504-4096-ac1e-cb13b7489af3"
Apr 16 02:33:57.651285 kubelet[2735]: E0416 02:33:57.650295 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2l7kg" podUID="948e86e4-5504-4096-ac1e-cb13b7489af3"
Apr 16 02:33:59.649756 kubelet[2735]: E0416 02:33:59.649591 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2l7kg" podUID="948e86e4-5504-4096-ac1e-cb13b7489af3"
Apr 16 02:34:01.694735 kubelet[2735]: E0416 02:34:01.694539 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2l7kg" podUID="948e86e4-5504-4096-ac1e-cb13b7489af3"
Apr 16 02:34:03.650605 kubelet[2735]: E0416 02:34:03.649584 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2l7kg" podUID="948e86e4-5504-4096-ac1e-cb13b7489af3"
Apr 16 02:34:05.649365 kubelet[2735]: E0416 02:34:05.649317 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2l7kg" podUID="948e86e4-5504-4096-ac1e-cb13b7489af3"
Apr 16 02:34:07.140141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3206361660.mount: Deactivated successfully.
Apr 16 02:34:07.438937 containerd[1580]: time="2026-04-16T02:34:07.438748799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Apr 16 02:34:07.446988 containerd[1580]: time="2026-04-16T02:34:07.446921396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 14.259807532s"
Apr 16 02:34:07.446988 containerd[1580]: time="2026-04-16T02:34:07.446984459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Apr 16 02:34:07.449545 containerd[1580]: time="2026-04-16T02:34:07.449396303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:34:07.450175 containerd[1580]: time="2026-04-16T02:34:07.450121631Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:34:07.451071 containerd[1580]: time="2026-04-16T02:34:07.450940373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:34:07.454807 containerd[1580]: time="2026-04-16T02:34:07.454115338Z" level=info msg="CreateContainer within sandbox \"cfe2c4110f3effb123267128f910c0022b7793b9fdf5d4f4f7a8805b79151eb6\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 16 02:34:07.544006 containerd[1580]: time="2026-04-16T02:34:07.542274381Z" level=info msg="Container dc43f5a9e96efd95e4ed06401af66fbe6dcd73e42923a966f416d708c0120049: CDI devices from CRI Config.CDIDevices: []"
Apr 16 02:34:07.611720 containerd[1580]: time="2026-04-16T02:34:07.611560254Z" level=info msg="CreateContainer within sandbox \"cfe2c4110f3effb123267128f910c0022b7793b9fdf5d4f4f7a8805b79151eb6\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"dc43f5a9e96efd95e4ed06401af66fbe6dcd73e42923a966f416d708c0120049\""
Apr 16 02:34:07.615342 containerd[1580]: time="2026-04-16T02:34:07.613595517Z" level=info msg="StartContainer for \"dc43f5a9e96efd95e4ed06401af66fbe6dcd73e42923a966f416d708c0120049\""
Apr 16 02:34:07.618427 containerd[1580]: time="2026-04-16T02:34:07.618355237Z" level=info msg="connecting to shim dc43f5a9e96efd95e4ed06401af66fbe6dcd73e42923a966f416d708c0120049" address="unix:///run/containerd/s/b2ff8976437234f94b556c48bdff3701900b1dd3676b4b8fee594ad9a8ca86bd" protocol=ttrpc version=3
Apr 16 02:34:07.645966 kubelet[2735]: E0416 02:34:07.645909 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2l7kg" podUID="948e86e4-5504-4096-ac1e-cb13b7489af3"
Apr 16 02:34:07.648528 systemd[1]: Started cri-containerd-dc43f5a9e96efd95e4ed06401af66fbe6dcd73e42923a966f416d708c0120049.scope - libcontainer container dc43f5a9e96efd95e4ed06401af66fbe6dcd73e42923a966f416d708c0120049.
Apr 16 02:34:07.813129 containerd[1580]: time="2026-04-16T02:34:07.813073614Z" level=info msg="StartContainer for \"dc43f5a9e96efd95e4ed06401af66fbe6dcd73e42923a966f416d708c0120049\" returns successfully"
Apr 16 02:34:07.901403 systemd[1]: cri-containerd-dc43f5a9e96efd95e4ed06401af66fbe6dcd73e42923a966f416d708c0120049.scope: Deactivated successfully.
Apr 16 02:34:07.916729 containerd[1580]: time="2026-04-16T02:34:07.916553358Z" level=info msg="received container exit event container_id:\"dc43f5a9e96efd95e4ed06401af66fbe6dcd73e42923a966f416d708c0120049\" id:\"dc43f5a9e96efd95e4ed06401af66fbe6dcd73e42923a966f416d708c0120049\" pid:3426 exited_at:{seconds:1776306847 nanos:907204054}"
Apr 16 02:34:08.140026 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc43f5a9e96efd95e4ed06401af66fbe6dcd73e42923a966f416d708c0120049-rootfs.mount: Deactivated successfully.
Apr 16 02:34:08.233287 containerd[1580]: time="2026-04-16T02:34:08.231689165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 16 02:34:08.265336 kubelet[2735]: I0416 02:34:08.264969 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f7549cfc4-qm2z8" podStartSLOduration=16.675129798 podStartE2EDuration="21.264947333s" podCreationTimestamp="2026-04-16 02:33:47 +0000 UTC" firstStartedPulling="2026-04-16 02:33:48.597092793 +0000 UTC m=+23.044124278" lastFinishedPulling="2026-04-16 02:33:53.186910336 +0000 UTC m=+27.633941813" observedRunningTime="2026-04-16 02:33:54.082973061 +0000 UTC m=+28.530004553" watchObservedRunningTime="2026-04-16 02:34:08.264947333 +0000 UTC m=+42.711978854"
Apr 16 02:34:09.700989 kubelet[2735]: E0416 02:34:09.700778 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2l7kg" podUID="948e86e4-5504-4096-ac1e-cb13b7489af3"
Apr 16 02:34:11.670292 kubelet[2735]: E0416 02:34:11.669912 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2l7kg" podUID="948e86e4-5504-4096-ac1e-cb13b7489af3"
Apr 16 02:34:13.513835 containerd[1580]: time="2026-04-16T02:34:13.513444477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:34:13.518051 containerd[1580]: time="2026-04-16T02:34:13.517963180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Apr 16 02:34:13.532356 containerd[1580]: time="2026-04-16T02:34:13.531705461Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:34:13.591733 containerd[1580]: time="2026-04-16T02:34:13.548509804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:34:13.594461 containerd[1580]: time="2026-04-16T02:34:13.594337059Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 5.362606566s"
Apr 16 02:34:13.594461 containerd[1580]: time="2026-04-16T02:34:13.594400507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Apr 16 02:34:13.623210 containerd[1580]: time="2026-04-16T02:34:13.623162893Z" level=info msg="CreateContainer within sandbox \"cfe2c4110f3effb123267128f910c0022b7793b9fdf5d4f4f7a8805b79151eb6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 16 02:34:13.647146 containerd[1580]: time="2026-04-16T02:34:13.646536806Z" level=info msg="Container 9fe83b306a8881650464c2109a7b9876e89aae0af7d0c9f339d3a762e174e274: CDI devices from CRI Config.CDIDevices: []"
Apr 16 02:34:13.662096 kubelet[2735]: E0416 02:34:13.661961 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2l7kg" podUID="948e86e4-5504-4096-ac1e-cb13b7489af3"
Apr 16 02:34:13.673713 containerd[1580]: time="2026-04-16T02:34:13.673623176Z" level=info msg="CreateContainer within sandbox \"cfe2c4110f3effb123267128f910c0022b7793b9fdf5d4f4f7a8805b79151eb6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9fe83b306a8881650464c2109a7b9876e89aae0af7d0c9f339d3a762e174e274\""
Apr 16 02:34:13.677190 containerd[1580]: time="2026-04-16T02:34:13.676625252Z" level=info msg="StartContainer for \"9fe83b306a8881650464c2109a7b9876e89aae0af7d0c9f339d3a762e174e274\""
Apr 16 02:34:13.684852 containerd[1580]: time="2026-04-16T02:34:13.684443647Z" level=info msg="connecting to shim 9fe83b306a8881650464c2109a7b9876e89aae0af7d0c9f339d3a762e174e274" address="unix:///run/containerd/s/b2ff8976437234f94b556c48bdff3701900b1dd3676b4b8fee594ad9a8ca86bd" protocol=ttrpc version=3
Apr 16 02:34:13.736016 systemd[1]: Started cri-containerd-9fe83b306a8881650464c2109a7b9876e89aae0af7d0c9f339d3a762e174e274.scope - libcontainer container 9fe83b306a8881650464c2109a7b9876e89aae0af7d0c9f339d3a762e174e274.
Apr 16 02:34:13.959858 containerd[1580]: time="2026-04-16T02:34:13.959652914Z" level=info msg="StartContainer for \"9fe83b306a8881650464c2109a7b9876e89aae0af7d0c9f339d3a762e174e274\" returns successfully"
Apr 16 02:34:14.958884 systemd[1]: cri-containerd-9fe83b306a8881650464c2109a7b9876e89aae0af7d0c9f339d3a762e174e274.scope: Deactivated successfully.
Apr 16 02:34:14.959618 systemd[1]: cri-containerd-9fe83b306a8881650464c2109a7b9876e89aae0af7d0c9f339d3a762e174e274.scope: Consumed 1.003s CPU time, 183.2M memory peak, 3.3M read from disk, 177M written to disk.
Apr 16 02:34:14.963632 containerd[1580]: time="2026-04-16T02:34:14.963554115Z" level=info msg="received container exit event container_id:\"9fe83b306a8881650464c2109a7b9876e89aae0af7d0c9f339d3a762e174e274\" id:\"9fe83b306a8881650464c2109a7b9876e89aae0af7d0c9f339d3a762e174e274\" pid:3483 exited_at:{seconds:1776306854 nanos:962959197}"
Apr 16 02:34:14.994049 kubelet[2735]: I0416 02:34:14.994020 2735 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Apr 16 02:34:14.999979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fe83b306a8881650464c2109a7b9876e89aae0af7d0c9f339d3a762e174e274-rootfs.mount: Deactivated successfully.
Apr 16 02:34:15.143166 systemd[1]: Created slice kubepods-burstable-pod8b236eee_56e9_410d_a6b6_5768df51bbdf.slice - libcontainer container kubepods-burstable-pod8b236eee_56e9_410d_a6b6_5768df51bbdf.slice.
Apr 16 02:34:15.208981 systemd[1]: Created slice kubepods-besteffort-pod62859b98_ece2_40d6_b938_ca7d0fe4bc04.slice - libcontainer container kubepods-besteffort-pod62859b98_ece2_40d6_b938_ca7d0fe4bc04.slice.
Apr 16 02:34:15.226037 systemd[1]: Created slice kubepods-burstable-pod849bda9a_9ad3_487b_a536_ce8b4d7a8312.slice - libcontainer container kubepods-burstable-pod849bda9a_9ad3_487b_a536_ce8b4d7a8312.slice.
Apr 16 02:34:15.241526 systemd[1]: Created slice kubepods-besteffort-pod58ccd239_9910_4be9_8659_4bd821d1f8db.slice - libcontainer container kubepods-besteffort-pod58ccd239_9910_4be9_8659_4bd821d1f8db.slice.
Apr 16 02:34:15.260058 systemd[1]: Created slice kubepods-besteffort-pod16700172_7e74_43cf_8388_58d9547167db.slice - libcontainer container kubepods-besteffort-pod16700172_7e74_43cf_8388_58d9547167db.slice.
Apr 16 02:34:15.273153 systemd[1]: Created slice kubepods-besteffort-podf4ac2bdd_d2c4_4fbd_a124_1eb72e878921.slice - libcontainer container kubepods-besteffort-podf4ac2bdd_d2c4_4fbd_a124_1eb72e878921.slice.
Apr 16 02:34:15.286873 systemd[1]: Created slice kubepods-besteffort-podf27621ca_dedb_4053_89fd_9f574580e12b.slice - libcontainer container kubepods-besteffort-podf27621ca_dedb_4053_89fd_9f574580e12b.slice.
Apr 16 02:34:15.307786 kubelet[2735]: I0416 02:34:15.307406 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tm5b\" (UniqueName: \"kubernetes.io/projected/58ccd239-9910-4be9-8659-4bd821d1f8db-kube-api-access-6tm5b\") pod \"calico-apiserver-74fc6d9f9b-pdkgn\" (UID: \"58ccd239-9910-4be9-8659-4bd821d1f8db\") " pod="calico-system/calico-apiserver-74fc6d9f9b-pdkgn"
Apr 16 02:34:15.307786 kubelet[2735]: I0416 02:34:15.307491 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f4ac2bdd-d2c4-4fbd-a124-1eb72e878921-calico-apiserver-certs\") pod \"calico-apiserver-74fc6d9f9b-xt2mx\" (UID: \"f4ac2bdd-d2c4-4fbd-a124-1eb72e878921\") " pod="calico-system/calico-apiserver-74fc6d9f9b-xt2mx"
Apr 16 02:34:15.307786 kubelet[2735]: I0416 02:34:15.307518 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v4lq\" (UniqueName: \"kubernetes.io/projected/f4ac2bdd-d2c4-4fbd-a124-1eb72e878921-kube-api-access-7v4lq\") pod \"calico-apiserver-74fc6d9f9b-xt2mx\" (UID: \"f4ac2bdd-d2c4-4fbd-a124-1eb72e878921\") " pod="calico-system/calico-apiserver-74fc6d9f9b-xt2mx"
Apr 16 02:34:15.307786 kubelet[2735]: I0416 02:34:15.307560 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4ttr\" (UniqueName: \"kubernetes.io/projected/849bda9a-9ad3-487b-a536-ce8b4d7a8312-kube-api-access-x4ttr\") pod \"coredns-66bc5c9577-rtc2h\" (UID: \"849bda9a-9ad3-487b-a536-ce8b4d7a8312\") " pod="kube-system/coredns-66bc5c9577-rtc2h"
Apr 16 02:34:15.307786 kubelet[2735]: I0416 02:34:15.307590 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b236eee-56e9-410d-a6b6-5768df51bbdf-config-volume\") pod \"coredns-66bc5c9577-jc7c4\" (UID: \"8b236eee-56e9-410d-a6b6-5768df51bbdf\") " pod="kube-system/coredns-66bc5c9577-jc7c4"
Apr 16 02:34:15.308077 kubelet[2735]: I0416 02:34:15.307680 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/849bda9a-9ad3-487b-a536-ce8b4d7a8312-config-volume\") pod \"coredns-66bc5c9577-rtc2h\" (UID: \"849bda9a-9ad3-487b-a536-ce8b4d7a8312\") " pod="kube-system/coredns-66bc5c9577-rtc2h"
Apr 16 02:34:15.308077 kubelet[2735]: I0416 02:34:15.307788 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/16700172-7e74-43cf-8388-58d9547167db-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-kbk5r\" (UID: \"16700172-7e74-43cf-8388-58d9547167db\") " pod="calico-system/goldmane-cccfbd5cf-kbk5r"
Apr 16 02:34:15.308077 kubelet[2735]: I0416 02:34:15.307934 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16700172-7e74-43cf-8388-58d9547167db-config\") pod \"goldmane-cccfbd5cf-kbk5r\" (UID: \"16700172-7e74-43cf-8388-58d9547167db\") " pod="calico-system/goldmane-cccfbd5cf-kbk5r"
Apr 16 02:34:15.308077 kubelet[2735]: I0416 02:34:15.308004 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/58ccd239-9910-4be9-8659-4bd821d1f8db-calico-apiserver-certs\") pod \"calico-apiserver-74fc6d9f9b-pdkgn\" (UID: \"58ccd239-9910-4be9-8659-4bd821d1f8db\") " pod="calico-system/calico-apiserver-74fc6d9f9b-pdkgn"
Apr 16 02:34:15.308077 kubelet[2735]: I0416 02:34:15.308029 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcgd8\" (UniqueName: \"kubernetes.io/projected/16700172-7e74-43cf-8388-58d9547167db-kube-api-access-kcgd8\") pod \"goldmane-cccfbd5cf-kbk5r\" (UID: \"16700172-7e74-43cf-8388-58d9547167db\") " pod="calico-system/goldmane-cccfbd5cf-kbk5r"
Apr 16 02:34:15.308157 kubelet[2735]: I0416 02:34:15.308084 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6gkn\" (UniqueName: \"kubernetes.io/projected/62859b98-ece2-40d6-b938-ca7d0fe4bc04-kube-api-access-j6gkn\") pod \"calico-kube-controllers-7cf9694cf9-kwmsb\" (UID: \"62859b98-ece2-40d6-b938-ca7d0fe4bc04\") " pod="calico-system/calico-kube-controllers-7cf9694cf9-kwmsb"
Apr 16 02:34:15.308157 kubelet[2735]: I0416 02:34:15.308109 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16700172-7e74-43cf-8388-58d9547167db-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-kbk5r\" (UID: \"16700172-7e74-43cf-8388-58d9547167db\") " pod="calico-system/goldmane-cccfbd5cf-kbk5r"
Apr 16 02:34:15.308157 kubelet[2735]: I0416 02:34:15.308145 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5972\" (UniqueName: \"kubernetes.io/projected/8b236eee-56e9-410d-a6b6-5768df51bbdf-kube-api-access-q5972\") pod \"coredns-66bc5c9577-jc7c4\" (UID: \"8b236eee-56e9-410d-a6b6-5768df51bbdf\") " pod="kube-system/coredns-66bc5c9577-jc7c4"
Apr 16 02:34:15.309364 kubelet[2735]: I0416 02:34:15.308197 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62859b98-ece2-40d6-b938-ca7d0fe4bc04-tigera-ca-bundle\") pod \"calico-kube-controllers-7cf9694cf9-kwmsb\" (UID: \"62859b98-ece2-40d6-b938-ca7d0fe4bc04\") " pod="calico-system/calico-kube-controllers-7cf9694cf9-kwmsb"
Apr 16 02:34:15.383702 containerd[1580]: time="2026-04-16T02:34:15.383628084Z" level=info msg="CreateContainer within sandbox \"cfe2c4110f3effb123267128f910c0022b7793b9fdf5d4f4f7a8805b79151eb6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 16 02:34:15.409165 kubelet[2735]: I0416 02:34:15.409089 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f27621ca-dedb-4053-89fd-9f574580e12b-whisker-backend-key-pair\") pod \"whisker-7d75f644bd-hfshg\" (UID: \"f27621ca-dedb-4053-89fd-9f574580e12b\") " pod="calico-system/whisker-7d75f644bd-hfshg"
Apr 16 02:34:15.409635 kubelet[2735]: I0416 02:34:15.409310 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f27621ca-dedb-4053-89fd-9f574580e12b-whisker-ca-bundle\") pod \"whisker-7d75f644bd-hfshg\" (UID: \"f27621ca-dedb-4053-89fd-9f574580e12b\") " pod="calico-system/whisker-7d75f644bd-hfshg"
Apr 16 02:34:15.409635 kubelet[2735]: I0416 02:34:15.409429 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpxwf\" (UniqueName: \"kubernetes.io/projected/f27621ca-dedb-4053-89fd-9f574580e12b-kube-api-access-zpxwf\") pod \"whisker-7d75f644bd-hfshg\" (UID: \"f27621ca-dedb-4053-89fd-9f574580e12b\") " pod="calico-system/whisker-7d75f644bd-hfshg"
Apr 16 02:34:15.409635 kubelet[2735]: I0416 02:34:15.409523 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f27621ca-dedb-4053-89fd-9f574580e12b-nginx-config\") pod \"whisker-7d75f644bd-hfshg\" (UID: \"f27621ca-dedb-4053-89fd-9f574580e12b\") " pod="calico-system/whisker-7d75f644bd-hfshg"
Apr 16 02:34:15.418693 containerd[1580]: time="2026-04-16T02:34:15.418566477Z" level=info msg="Container ff7b8cf29b0b5015a6c7bb6c5e62c230c4d70ffa77823dcdeae6ce73800a47c5: CDI devices from CRI Config.CDIDevices: []"
Apr 16 02:34:15.420559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3153583938.mount: Deactivated successfully.
Apr 16 02:34:15.522778 containerd[1580]: time="2026-04-16T02:34:15.521859346Z" level=info msg="CreateContainer within sandbox \"cfe2c4110f3effb123267128f910c0022b7793b9fdf5d4f4f7a8805b79151eb6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ff7b8cf29b0b5015a6c7bb6c5e62c230c4d70ffa77823dcdeae6ce73800a47c5\""
Apr 16 02:34:15.530296 containerd[1580]: time="2026-04-16T02:34:15.528809625Z" level=info msg="StartContainer for \"ff7b8cf29b0b5015a6c7bb6c5e62c230c4d70ffa77823dcdeae6ce73800a47c5\""
Apr 16 02:34:15.534357 containerd[1580]: time="2026-04-16T02:34:15.534124040Z" level=info msg="connecting to shim ff7b8cf29b0b5015a6c7bb6c5e62c230c4d70ffa77823dcdeae6ce73800a47c5" address="unix:///run/containerd/s/b2ff8976437234f94b556c48bdff3701900b1dd3676b4b8fee594ad9a8ca86bd" protocol=ttrpc version=3
Apr 16 02:34:15.538410 containerd[1580]: time="2026-04-16T02:34:15.538188071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rtc2h,Uid:849bda9a-9ad3-487b-a536-ce8b4d7a8312,Namespace:kube-system,Attempt:0,}"
Apr 16 02:34:15.559186 containerd[1580]: time="2026-04-16T02:34:15.559143918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fc6d9f9b-pdkgn,Uid:58ccd239-9910-4be9-8659-4bd821d1f8db,Namespace:calico-system,Attempt:0,}"
Apr 16 02:34:15.573999 systemd[1]: Started cri-containerd-ff7b8cf29b0b5015a6c7bb6c5e62c230c4d70ffa77823dcdeae6ce73800a47c5.scope - libcontainer container ff7b8cf29b0b5015a6c7bb6c5e62c230c4d70ffa77823dcdeae6ce73800a47c5.
Apr 16 02:34:15.583151 containerd[1580]: time="2026-04-16T02:34:15.582920068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-kbk5r,Uid:16700172-7e74-43cf-8388-58d9547167db,Namespace:calico-system,Attempt:0,}"
Apr 16 02:34:15.585583 containerd[1580]: time="2026-04-16T02:34:15.585536958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fc6d9f9b-xt2mx,Uid:f4ac2bdd-d2c4-4fbd-a124-1eb72e878921,Namespace:calico-system,Attempt:0,}"
Apr 16 02:34:15.601286 containerd[1580]: time="2026-04-16T02:34:15.601019881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d75f644bd-hfshg,Uid:f27621ca-dedb-4053-89fd-9f574580e12b,Namespace:calico-system,Attempt:0,}"
Apr 16 02:34:15.660195 systemd[1]: Created slice kubepods-besteffort-pod948e86e4_5504_4096_ac1e_cb13b7489af3.slice - libcontainer container kubepods-besteffort-pod948e86e4_5504_4096_ac1e_cb13b7489af3.slice.
Apr 16 02:34:15.672324 containerd[1580]: time="2026-04-16T02:34:15.671603469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2l7kg,Uid:948e86e4-5504-4096-ac1e-cb13b7489af3,Namespace:calico-system,Attempt:0,}" Apr 16 02:34:15.766666 containerd[1580]: time="2026-04-16T02:34:15.766616926Z" level=info msg="StartContainer for \"ff7b8cf29b0b5015a6c7bb6c5e62c230c4d70ffa77823dcdeae6ce73800a47c5\" returns successfully" Apr 16 02:34:15.819948 containerd[1580]: time="2026-04-16T02:34:15.815120880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jc7c4,Uid:8b236eee-56e9-410d-a6b6-5768df51bbdf,Namespace:kube-system,Attempt:0,}" Apr 16 02:34:15.831378 containerd[1580]: time="2026-04-16T02:34:15.831193839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf9694cf9-kwmsb,Uid:62859b98-ece2-40d6-b938-ca7d0fe4bc04,Namespace:calico-system,Attempt:0,}" Apr 16 02:34:15.920324 containerd[1580]: time="2026-04-16T02:34:15.920264771Z" level=error msg="Failed to destroy network for sandbox \"1d9c521152e8bed47672c5dc767981e111136f46da4024806004960dcd01b470\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:15.924144 containerd[1580]: time="2026-04-16T02:34:15.924060301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fc6d9f9b-xt2mx,Uid:f4ac2bdd-d2c4-4fbd-a124-1eb72e878921,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d9c521152e8bed47672c5dc767981e111136f46da4024806004960dcd01b470\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:15.940325 containerd[1580]: time="2026-04-16T02:34:15.939931342Z" 
level=error msg="Failed to destroy network for sandbox \"1db20a76a58ffcffc74124b688dd7ae0c46cfcd55cb1c62f0ba90da1bd9dbc82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:15.951621 containerd[1580]: time="2026-04-16T02:34:15.951449580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fc6d9f9b-pdkgn,Uid:58ccd239-9910-4be9-8659-4bd821d1f8db,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1db20a76a58ffcffc74124b688dd7ae0c46cfcd55cb1c62f0ba90da1bd9dbc82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:15.955640 containerd[1580]: time="2026-04-16T02:34:15.955365448Z" level=error msg="Failed to destroy network for sandbox \"8ddb55c5f07216e95f8e89889935adc46396d2403e92bd3ee09658a4b34c5874\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:15.961582 containerd[1580]: time="2026-04-16T02:34:15.961465257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2l7kg,Uid:948e86e4-5504-4096-ac1e-cb13b7489af3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ddb55c5f07216e95f8e89889935adc46396d2403e92bd3ee09658a4b34c5874\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:15.977852 kubelet[2735]: E0416 02:34:15.972260 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"1d9c521152e8bed47672c5dc767981e111136f46da4024806004960dcd01b470\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:15.977852 kubelet[2735]: E0416 02:34:15.972399 2735 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d9c521152e8bed47672c5dc767981e111136f46da4024806004960dcd01b470\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-74fc6d9f9b-xt2mx" Apr 16 02:34:15.977852 kubelet[2735]: E0416 02:34:15.972472 2735 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d9c521152e8bed47672c5dc767981e111136f46da4024806004960dcd01b470\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-74fc6d9f9b-xt2mx" Apr 16 02:34:15.979480 kubelet[2735]: E0416 02:34:15.972617 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74fc6d9f9b-xt2mx_calico-system(f4ac2bdd-d2c4-4fbd-a124-1eb72e878921)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74fc6d9f9b-xt2mx_calico-system(f4ac2bdd-d2c4-4fbd-a124-1eb72e878921)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d9c521152e8bed47672c5dc767981e111136f46da4024806004960dcd01b470\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-74fc6d9f9b-xt2mx" podUID="f4ac2bdd-d2c4-4fbd-a124-1eb72e878921" Apr 16 02:34:15.979480 kubelet[2735]: E0416 02:34:15.977837 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1db20a76a58ffcffc74124b688dd7ae0c46cfcd55cb1c62f0ba90da1bd9dbc82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:15.979480 kubelet[2735]: E0416 02:34:15.977924 2735 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1db20a76a58ffcffc74124b688dd7ae0c46cfcd55cb1c62f0ba90da1bd9dbc82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-74fc6d9f9b-pdkgn" Apr 16 02:34:15.982401 kubelet[2735]: E0416 02:34:15.972267 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ddb55c5f07216e95f8e89889935adc46396d2403e92bd3ee09658a4b34c5874\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:15.982401 kubelet[2735]: E0416 02:34:15.980349 2735 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1db20a76a58ffcffc74124b688dd7ae0c46cfcd55cb1c62f0ba90da1bd9dbc82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-74fc6d9f9b-pdkgn" Apr 16 02:34:15.982401 kubelet[2735]: E0416 02:34:15.980453 2735 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ddb55c5f07216e95f8e89889935adc46396d2403e92bd3ee09658a4b34c5874\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2l7kg" Apr 16 02:34:15.982401 kubelet[2735]: E0416 02:34:15.980487 2735 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ddb55c5f07216e95f8e89889935adc46396d2403e92bd3ee09658a4b34c5874\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2l7kg" Apr 16 02:34:15.982667 kubelet[2735]: E0416 02:34:15.980507 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74fc6d9f9b-pdkgn_calico-system(58ccd239-9910-4be9-8659-4bd821d1f8db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74fc6d9f9b-pdkgn_calico-system(58ccd239-9910-4be9-8659-4bd821d1f8db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1db20a76a58ffcffc74124b688dd7ae0c46cfcd55cb1c62f0ba90da1bd9dbc82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-74fc6d9f9b-pdkgn" podUID="58ccd239-9910-4be9-8659-4bd821d1f8db" Apr 16 02:34:15.982667 kubelet[2735]: E0416 02:34:15.980713 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-2l7kg_calico-system(948e86e4-5504-4096-ac1e-cb13b7489af3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2l7kg_calico-system(948e86e4-5504-4096-ac1e-cb13b7489af3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ddb55c5f07216e95f8e89889935adc46396d2403e92bd3ee09658a4b34c5874\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2l7kg" podUID="948e86e4-5504-4096-ac1e-cb13b7489af3" Apr 16 02:34:16.006934 containerd[1580]: time="2026-04-16T02:34:16.006846622Z" level=error msg="Failed to destroy network for sandbox \"e3c40d580cc2a469370e40024308928703cb7ed97c0cc56977d23e41260ee5e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:16.012104 containerd[1580]: time="2026-04-16T02:34:16.011721222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-kbk5r,Uid:16700172-7e74-43cf-8388-58d9547167db,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3c40d580cc2a469370e40024308928703cb7ed97c0cc56977d23e41260ee5e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:16.014264 kubelet[2735]: E0416 02:34:16.012817 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3c40d580cc2a469370e40024308928703cb7ed97c0cc56977d23e41260ee5e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:16.014264 kubelet[2735]: E0416 02:34:16.012917 2735 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3c40d580cc2a469370e40024308928703cb7ed97c0cc56977d23e41260ee5e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-kbk5r" Apr 16 02:34:16.014264 kubelet[2735]: E0416 02:34:16.012947 2735 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3c40d580cc2a469370e40024308928703cb7ed97c0cc56977d23e41260ee5e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-kbk5r" Apr 16 02:34:16.016770 kubelet[2735]: E0416 02:34:16.013046 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-kbk5r_calico-system(16700172-7e74-43cf-8388-58d9547167db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-kbk5r_calico-system(16700172-7e74-43cf-8388-58d9547167db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3c40d580cc2a469370e40024308928703cb7ed97c0cc56977d23e41260ee5e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-kbk5r" podUID="16700172-7e74-43cf-8388-58d9547167db" Apr 16 02:34:16.043105 systemd[1]: run-netns-cni\x2db1a9a548\x2d6a77\x2d22a2\x2dfbf7\x2d84e71b3e9191.mount: Deactivated successfully. 
Apr 16 02:34:16.061045 containerd[1580]: time="2026-04-16T02:34:16.060442463Z" level=error msg="Failed to destroy network for sandbox \"aa7d04aceacd9de3f82fa831a4363493e05c2de071421e70c9f49305421b4278\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:16.064325 containerd[1580]: time="2026-04-16T02:34:16.064253828Z" level=error msg="Failed to destroy network for sandbox \"42b8d1faa5a52f4b91735d8395b6cfc084e180e6d18c7b7096a253c2b5bada47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:16.066829 systemd[1]: run-netns-cni\x2dda0a7928\x2d0fb7\x2d55a6\x2d5433\x2d4ab539a08a35.mount: Deactivated successfully. Apr 16 02:34:16.072284 containerd[1580]: time="2026-04-16T02:34:16.071369286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d75f644bd-hfshg,Uid:f27621ca-dedb-4053-89fd-9f574580e12b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"42b8d1faa5a52f4b91735d8395b6cfc084e180e6d18c7b7096a253c2b5bada47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:16.072480 systemd[1]: run-netns-cni\x2d415c5c2b\x2dcb42\x2de6d3\x2da969\x2de84a3d28e07f.mount: Deactivated successfully. 
Apr 16 02:34:16.073503 containerd[1580]: time="2026-04-16T02:34:16.072968317Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rtc2h,Uid:849bda9a-9ad3-487b-a536-ce8b4d7a8312,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa7d04aceacd9de3f82fa831a4363493e05c2de071421e70c9f49305421b4278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:16.075017 kubelet[2735]: E0416 02:34:16.074821 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42b8d1faa5a52f4b91735d8395b6cfc084e180e6d18c7b7096a253c2b5bada47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:16.075185 kubelet[2735]: E0416 02:34:16.075063 2735 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42b8d1faa5a52f4b91735d8395b6cfc084e180e6d18c7b7096a253c2b5bada47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d75f644bd-hfshg" Apr 16 02:34:16.075185 kubelet[2735]: E0416 02:34:16.075103 2735 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42b8d1faa5a52f4b91735d8395b6cfc084e180e6d18c7b7096a253c2b5bada47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-7d75f644bd-hfshg" Apr 16 02:34:16.075302 kubelet[2735]: E0416 02:34:16.075210 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d75f644bd-hfshg_calico-system(f27621ca-dedb-4053-89fd-9f574580e12b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d75f644bd-hfshg_calico-system(f27621ca-dedb-4053-89fd-9f574580e12b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42b8d1faa5a52f4b91735d8395b6cfc084e180e6d18c7b7096a253c2b5bada47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d75f644bd-hfshg" podUID="f27621ca-dedb-4053-89fd-9f574580e12b" Apr 16 02:34:16.076312 kubelet[2735]: E0416 02:34:16.076126 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa7d04aceacd9de3f82fa831a4363493e05c2de071421e70c9f49305421b4278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:16.076312 kubelet[2735]: E0416 02:34:16.076178 2735 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa7d04aceacd9de3f82fa831a4363493e05c2de071421e70c9f49305421b4278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rtc2h" Apr 16 02:34:16.076312 kubelet[2735]: E0416 02:34:16.076199 2735 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"aa7d04aceacd9de3f82fa831a4363493e05c2de071421e70c9f49305421b4278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rtc2h" Apr 16 02:34:16.077362 kubelet[2735]: E0416 02:34:16.076274 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-rtc2h_kube-system(849bda9a-9ad3-487b-a536-ce8b4d7a8312)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-rtc2h_kube-system(849bda9a-9ad3-487b-a536-ce8b4d7a8312)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa7d04aceacd9de3f82fa831a4363493e05c2de071421e70c9f49305421b4278\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-rtc2h" podUID="849bda9a-9ad3-487b-a536-ce8b4d7a8312" Apr 16 02:34:16.202097 containerd[1580]: time="2026-04-16T02:34:16.201791440Z" level=error msg="Failed to destroy network for sandbox \"889b3c810174fa53ac78b7c5ab86ab87bcbff2fb2239999b9aa7d5f8d5dd73e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:16.203685 systemd[1]: run-netns-cni\x2d6fca4aef\x2d7cad\x2d9691\x2d9edf\x2d097be5c9e500.mount: Deactivated successfully. 
Apr 16 02:34:16.205481 containerd[1580]: time="2026-04-16T02:34:16.205410534Z" level=error msg="Failed to destroy network for sandbox \"1386fa1d56a5215315a602e475daad624f0280dc0bd5b32552eb501f1b760388\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:16.207375 containerd[1580]: time="2026-04-16T02:34:16.207320540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jc7c4,Uid:8b236eee-56e9-410d-a6b6-5768df51bbdf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"889b3c810174fa53ac78b7c5ab86ab87bcbff2fb2239999b9aa7d5f8d5dd73e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:16.208305 kubelet[2735]: E0416 02:34:16.208202 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"889b3c810174fa53ac78b7c5ab86ab87bcbff2fb2239999b9aa7d5f8d5dd73e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:16.208379 kubelet[2735]: E0416 02:34:16.208346 2735 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"889b3c810174fa53ac78b7c5ab86ab87bcbff2fb2239999b9aa7d5f8d5dd73e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-jc7c4" Apr 16 02:34:16.208418 kubelet[2735]: E0416 02:34:16.208377 2735 kuberuntime_manager.go:1343] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"889b3c810174fa53ac78b7c5ab86ab87bcbff2fb2239999b9aa7d5f8d5dd73e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-jc7c4" Apr 16 02:34:16.208522 kubelet[2735]: E0416 02:34:16.208486 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-jc7c4_kube-system(8b236eee-56e9-410d-a6b6-5768df51bbdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-jc7c4_kube-system(8b236eee-56e9-410d-a6b6-5768df51bbdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"889b3c810174fa53ac78b7c5ab86ab87bcbff2fb2239999b9aa7d5f8d5dd73e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-jc7c4" podUID="8b236eee-56e9-410d-a6b6-5768df51bbdf" Apr 16 02:34:16.210253 containerd[1580]: time="2026-04-16T02:34:16.209404220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf9694cf9-kwmsb,Uid:62859b98-ece2-40d6-b938-ca7d0fe4bc04,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1386fa1d56a5215315a602e475daad624f0280dc0bd5b32552eb501f1b760388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:16.210870 kubelet[2735]: E0416 02:34:16.209921 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1386fa1d56a5215315a602e475daad624f0280dc0bd5b32552eb501f1b760388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:34:16.210870 kubelet[2735]: E0416 02:34:16.210086 2735 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1386fa1d56a5215315a602e475daad624f0280dc0bd5b32552eb501f1b760388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cf9694cf9-kwmsb" Apr 16 02:34:16.210870 kubelet[2735]: E0416 02:34:16.210121 2735 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1386fa1d56a5215315a602e475daad624f0280dc0bd5b32552eb501f1b760388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cf9694cf9-kwmsb" Apr 16 02:34:16.210962 kubelet[2735]: E0416 02:34:16.210277 2735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cf9694cf9-kwmsb_calico-system(62859b98-ece2-40d6-b938-ca7d0fe4bc04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cf9694cf9-kwmsb_calico-system(62859b98-ece2-40d6-b938-ca7d0fe4bc04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1386fa1d56a5215315a602e475daad624f0280dc0bd5b32552eb501f1b760388\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cf9694cf9-kwmsb" podUID="62859b98-ece2-40d6-b938-ca7d0fe4bc04" Apr 16 02:34:16.448478 kubelet[2735]: I0416 02:34:16.447209 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vvz7b" podStartSLOduration=4.435339264 podStartE2EDuration="29.447170866s" podCreationTimestamp="2026-04-16 02:33:47 +0000 UTC" firstStartedPulling="2026-04-16 02:33:48.58474134 +0000 UTC m=+23.031772815" lastFinishedPulling="2026-04-16 02:34:13.596572942 +0000 UTC m=+48.043604417" observedRunningTime="2026-04-16 02:34:16.446658194 +0000 UTC m=+50.893689675" watchObservedRunningTime="2026-04-16 02:34:16.447170866 +0000 UTC m=+50.894202340" Apr 16 02:34:16.735590 kubelet[2735]: I0416 02:34:16.733799 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f27621ca-dedb-4053-89fd-9f574580e12b-whisker-ca-bundle\") pod \"f27621ca-dedb-4053-89fd-9f574580e12b\" (UID: \"f27621ca-dedb-4053-89fd-9f574580e12b\") " Apr 16 02:34:16.735590 kubelet[2735]: I0416 02:34:16.733953 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f27621ca-dedb-4053-89fd-9f574580e12b-whisker-backend-key-pair\") pod \"f27621ca-dedb-4053-89fd-9f574580e12b\" (UID: \"f27621ca-dedb-4053-89fd-9f574580e12b\") " Apr 16 02:34:16.735590 kubelet[2735]: I0416 02:34:16.734019 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f27621ca-dedb-4053-89fd-9f574580e12b-nginx-config\") pod \"f27621ca-dedb-4053-89fd-9f574580e12b\" (UID: \"f27621ca-dedb-4053-89fd-9f574580e12b\") " Apr 16 02:34:16.735590 kubelet[2735]: I0416 02:34:16.734032 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpxwf\" (UniqueName: 
\"kubernetes.io/projected/f27621ca-dedb-4053-89fd-9f574580e12b-kube-api-access-zpxwf\") pod \"f27621ca-dedb-4053-89fd-9f574580e12b\" (UID: \"f27621ca-dedb-4053-89fd-9f574580e12b\") " Apr 16 02:34:16.736288 kubelet[2735]: I0416 02:34:16.736201 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f27621ca-dedb-4053-89fd-9f574580e12b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f27621ca-dedb-4053-89fd-9f574580e12b" (UID: "f27621ca-dedb-4053-89fd-9f574580e12b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 02:34:16.738765 kubelet[2735]: I0416 02:34:16.738644 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 02:34:16.741498 kubelet[2735]: I0416 02:34:16.741454 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f27621ca-dedb-4053-89fd-9f574580e12b-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "f27621ca-dedb-4053-89fd-9f574580e12b" (UID: "f27621ca-dedb-4053-89fd-9f574580e12b"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 02:34:16.744795 kubelet[2735]: I0416 02:34:16.744647 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f27621ca-dedb-4053-89fd-9f574580e12b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f27621ca-dedb-4053-89fd-9f574580e12b" (UID: "f27621ca-dedb-4053-89fd-9f574580e12b"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 02:34:16.747785 kubelet[2735]: I0416 02:34:16.747576 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f27621ca-dedb-4053-89fd-9f574580e12b-kube-api-access-zpxwf" (OuterVolumeSpecName: "kube-api-access-zpxwf") pod "f27621ca-dedb-4053-89fd-9f574580e12b" (UID: "f27621ca-dedb-4053-89fd-9f574580e12b"). InnerVolumeSpecName "kube-api-access-zpxwf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 02:34:16.835373 kubelet[2735]: I0416 02:34:16.834682 2735 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f27621ca-dedb-4053-89fd-9f574580e12b-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 16 02:34:16.835965 kubelet[2735]: I0416 02:34:16.835690 2735 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f27621ca-dedb-4053-89fd-9f574580e12b-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 16 02:34:16.835965 kubelet[2735]: I0416 02:34:16.835707 2735 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f27621ca-dedb-4053-89fd-9f574580e12b-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 16 02:34:16.835965 kubelet[2735]: I0416 02:34:16.835715 2735 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zpxwf\" (UniqueName: \"kubernetes.io/projected/f27621ca-dedb-4053-89fd-9f574580e12b-kube-api-access-zpxwf\") on node \"localhost\" DevicePath \"\"" Apr 16 02:34:17.003764 systemd[1]: run-netns-cni\x2d344bbc57\x2d3953\x2d11ca\x2dfe58\x2d1d5ce5a6af8d.mount: Deactivated successfully. Apr 16 02:34:17.003943 systemd[1]: var-lib-kubelet-pods-f27621ca\x2ddedb\x2d4053\x2d89fd\x2d9f574580e12b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzpxwf.mount: Deactivated successfully. 
Apr 16 02:34:17.003992 systemd[1]: var-lib-kubelet-pods-f27621ca\x2ddedb\x2d4053\x2d89fd\x2d9f574580e12b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 16 02:34:17.402606 systemd[1]: Removed slice kubepods-besteffort-podf27621ca_dedb_4053_89fd_9f574580e12b.slice - libcontainer container kubepods-besteffort-podf27621ca_dedb_4053_89fd_9f574580e12b.slice. Apr 16 02:34:17.668774 kubelet[2735]: I0416 02:34:17.668502 2735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f27621ca-dedb-4053-89fd-9f574580e12b" path="/var/lib/kubelet/pods/f27621ca-dedb-4053-89fd-9f574580e12b/volumes" Apr 16 02:34:17.673910 systemd[1]: Created slice kubepods-besteffort-podf5226165_de18_464b_ad15_e72348914624.slice - libcontainer container kubepods-besteffort-podf5226165_de18_464b_ad15_e72348914624.slice. Apr 16 02:34:17.748089 kubelet[2735]: I0416 02:34:17.747804 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f5226165-de18-464b-ad15-e72348914624-whisker-backend-key-pair\") pod \"whisker-569b69bfd9-clsvt\" (UID: \"f5226165-de18-464b-ad15-e72348914624\") " pod="calico-system/whisker-569b69bfd9-clsvt" Apr 16 02:34:17.748317 kubelet[2735]: I0416 02:34:17.748108 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f5226165-de18-464b-ad15-e72348914624-nginx-config\") pod \"whisker-569b69bfd9-clsvt\" (UID: \"f5226165-de18-464b-ad15-e72348914624\") " pod="calico-system/whisker-569b69bfd9-clsvt" Apr 16 02:34:17.748317 kubelet[2735]: I0416 02:34:17.748173 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn4b5\" (UniqueName: \"kubernetes.io/projected/f5226165-de18-464b-ad15-e72348914624-kube-api-access-rn4b5\") pod \"whisker-569b69bfd9-clsvt\" 
(UID: \"f5226165-de18-464b-ad15-e72348914624\") " pod="calico-system/whisker-569b69bfd9-clsvt" Apr 16 02:34:17.748317 kubelet[2735]: I0416 02:34:17.748199 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5226165-de18-464b-ad15-e72348914624-whisker-ca-bundle\") pod \"whisker-569b69bfd9-clsvt\" (UID: \"f5226165-de18-464b-ad15-e72348914624\") " pod="calico-system/whisker-569b69bfd9-clsvt" Apr 16 02:34:17.990396 containerd[1580]: time="2026-04-16T02:34:17.989651210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-569b69bfd9-clsvt,Uid:f5226165-de18-464b-ad15-e72348914624,Namespace:calico-system,Attempt:0,}" Apr 16 02:34:18.343745 systemd-networkd[1491]: cali7359436e336: Link UP Apr 16 02:34:18.346757 systemd-networkd[1491]: cali7359436e336: Gained carrier Apr 16 02:34:18.385267 containerd[1580]: 2026-04-16 02:34:18.037 [ERROR][3905] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 02:34:18.385267 containerd[1580]: 2026-04-16 02:34:18.075 [INFO][3905] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--569b69bfd9--clsvt-eth0 whisker-569b69bfd9- calico-system f5226165-de18-464b-ad15-e72348914624 956 0 2026-04-16 02:34:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:569b69bfd9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-569b69bfd9-clsvt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7359436e336 [] [] }} ContainerID="3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5" Namespace="calico-system" Pod="whisker-569b69bfd9-clsvt" 
WorkloadEndpoint="localhost-k8s-whisker--569b69bfd9--clsvt-" Apr 16 02:34:18.385267 containerd[1580]: 2026-04-16 02:34:18.075 [INFO][3905] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5" Namespace="calico-system" Pod="whisker-569b69bfd9-clsvt" WorkloadEndpoint="localhost-k8s-whisker--569b69bfd9--clsvt-eth0" Apr 16 02:34:18.385267 containerd[1580]: 2026-04-16 02:34:18.135 [INFO][3919] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5" HandleID="k8s-pod-network.3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5" Workload="localhost-k8s-whisker--569b69bfd9--clsvt-eth0" Apr 16 02:34:18.385559 containerd[1580]: 2026-04-16 02:34:18.202 [INFO][3919] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5" HandleID="k8s-pod-network.3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5" Workload="localhost-k8s-whisker--569b69bfd9--clsvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000368580), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-569b69bfd9-clsvt", "timestamp":"2026-04-16 02:34:18.13578964 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000432b00)} Apr 16 02:34:18.385559 containerd[1580]: 2026-04-16 02:34:18.202 [INFO][3919] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 02:34:18.385559 containerd[1580]: 2026-04-16 02:34:18.202 [INFO][3919] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 02:34:18.385559 containerd[1580]: 2026-04-16 02:34:18.202 [INFO][3919] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 02:34:18.385559 containerd[1580]: 2026-04-16 02:34:18.209 [INFO][3919] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5" host="localhost" Apr 16 02:34:18.385559 containerd[1580]: 2026-04-16 02:34:18.221 [INFO][3919] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 02:34:18.385559 containerd[1580]: 2026-04-16 02:34:18.238 [INFO][3919] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 02:34:18.385559 containerd[1580]: 2026-04-16 02:34:18.245 [INFO][3919] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 02:34:18.385559 containerd[1580]: 2026-04-16 02:34:18.255 [INFO][3919] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 02:34:18.385559 containerd[1580]: 2026-04-16 02:34:18.256 [INFO][3919] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5" host="localhost" Apr 16 02:34:18.385896 containerd[1580]: 2026-04-16 02:34:18.261 [INFO][3919] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5 Apr 16 02:34:18.385896 containerd[1580]: 2026-04-16 02:34:18.271 [INFO][3919] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5" host="localhost" Apr 16 02:34:18.385896 containerd[1580]: 2026-04-16 02:34:18.305 [INFO][3919] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5" host="localhost" Apr 16 02:34:18.385896 containerd[1580]: 2026-04-16 02:34:18.306 [INFO][3919] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5" host="localhost" Apr 16 02:34:18.385896 containerd[1580]: 2026-04-16 02:34:18.306 [INFO][3919] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 02:34:18.385896 containerd[1580]: 2026-04-16 02:34:18.306 [INFO][3919] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5" HandleID="k8s-pod-network.3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5" Workload="localhost-k8s-whisker--569b69bfd9--clsvt-eth0" Apr 16 02:34:18.385995 containerd[1580]: 2026-04-16 02:34:18.316 [INFO][3905] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5" Namespace="calico-system" Pod="whisker-569b69bfd9-clsvt" WorkloadEndpoint="localhost-k8s-whisker--569b69bfd9--clsvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--569b69bfd9--clsvt-eth0", GenerateName:"whisker-569b69bfd9-", Namespace:"calico-system", SelfLink:"", UID:"f5226165-de18-464b-ad15-e72348914624", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 34, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"569b69bfd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-569b69bfd9-clsvt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7359436e336", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:34:18.385995 containerd[1580]: 2026-04-16 02:34:18.316 [INFO][3905] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5" Namespace="calico-system" Pod="whisker-569b69bfd9-clsvt" WorkloadEndpoint="localhost-k8s-whisker--569b69bfd9--clsvt-eth0" Apr 16 02:34:18.386377 containerd[1580]: 2026-04-16 02:34:18.316 [INFO][3905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7359436e336 ContainerID="3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5" Namespace="calico-system" Pod="whisker-569b69bfd9-clsvt" WorkloadEndpoint="localhost-k8s-whisker--569b69bfd9--clsvt-eth0" Apr 16 02:34:18.386377 containerd[1580]: 2026-04-16 02:34:18.346 [INFO][3905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5" Namespace="calico-system" Pod="whisker-569b69bfd9-clsvt" WorkloadEndpoint="localhost-k8s-whisker--569b69bfd9--clsvt-eth0" Apr 16 02:34:18.386407 containerd[1580]: 2026-04-16 02:34:18.352 [INFO][3905] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5" Namespace="calico-system" Pod="whisker-569b69bfd9-clsvt" 
WorkloadEndpoint="localhost-k8s-whisker--569b69bfd9--clsvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--569b69bfd9--clsvt-eth0", GenerateName:"whisker-569b69bfd9-", Namespace:"calico-system", SelfLink:"", UID:"f5226165-de18-464b-ad15-e72348914624", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 34, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"569b69bfd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5", Pod:"whisker-569b69bfd9-clsvt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7359436e336", MAC:"22:e2:68:20:2b:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:34:18.386477 containerd[1580]: 2026-04-16 02:34:18.378 [INFO][3905] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5" Namespace="calico-system" Pod="whisker-569b69bfd9-clsvt" WorkloadEndpoint="localhost-k8s-whisker--569b69bfd9--clsvt-eth0" Apr 16 02:34:18.502042 containerd[1580]: time="2026-04-16T02:34:18.501945432Z" level=info msg="connecting to shim 
3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5" address="unix:///run/containerd/s/a6efcb075ea2f86f5220920244bf1017419f8df76628e084a1831f30e057686d" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:34:18.576844 systemd[1]: Started cri-containerd-3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5.scope - libcontainer container 3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5. Apr 16 02:34:18.617628 systemd-resolved[1494]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 02:34:18.716074 containerd[1580]: time="2026-04-16T02:34:18.715963710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-569b69bfd9-clsvt,Uid:f5226165-de18-464b-ad15-e72348914624,Namespace:calico-system,Attempt:0,} returns sandbox id \"3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5\"" Apr 16 02:34:18.725897 containerd[1580]: time="2026-04-16T02:34:18.724203691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 16 02:34:19.754859 systemd-networkd[1491]: cali7359436e336: Gained IPv6LL Apr 16 02:34:20.009845 systemd-networkd[1491]: vxlan.calico: Link UP Apr 16 02:34:20.009855 systemd-networkd[1491]: vxlan.calico: Gained carrier Apr 16 02:34:21.089767 containerd[1580]: time="2026-04-16T02:34:21.089626310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:21.092484 containerd[1580]: time="2026-04-16T02:34:21.092379815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 16 02:34:21.103347 containerd[1580]: time="2026-04-16T02:34:21.103254209Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:21.106473 containerd[1580]: 
time="2026-04-16T02:34:21.106377557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:21.110111 containerd[1580]: time="2026-04-16T02:34:21.109990328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.383038313s" Apr 16 02:34:21.110111 containerd[1580]: time="2026-04-16T02:34:21.110061218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 16 02:34:21.145478 containerd[1580]: time="2026-04-16T02:34:21.145075601Z" level=info msg="CreateContainer within sandbox \"3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 16 02:34:21.230316 containerd[1580]: time="2026-04-16T02:34:21.230207973Z" level=info msg="Container 1978e3ff2806e5f9f2864710e18803466759b86ffc90ae42065b4da01a853a2e: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:34:21.264288 containerd[1580]: time="2026-04-16T02:34:21.264177917Z" level=info msg="CreateContainer within sandbox \"3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1978e3ff2806e5f9f2864710e18803466759b86ffc90ae42065b4da01a853a2e\"" Apr 16 02:34:21.265655 containerd[1580]: time="2026-04-16T02:34:21.265603139Z" level=info msg="StartContainer for \"1978e3ff2806e5f9f2864710e18803466759b86ffc90ae42065b4da01a853a2e\"" Apr 16 02:34:21.269127 containerd[1580]: 
time="2026-04-16T02:34:21.268462677Z" level=info msg="connecting to shim 1978e3ff2806e5f9f2864710e18803466759b86ffc90ae42065b4da01a853a2e" address="unix:///run/containerd/s/a6efcb075ea2f86f5220920244bf1017419f8df76628e084a1831f30e057686d" protocol=ttrpc version=3 Apr 16 02:34:21.387442 systemd[1]: Started cri-containerd-1978e3ff2806e5f9f2864710e18803466759b86ffc90ae42065b4da01a853a2e.scope - libcontainer container 1978e3ff2806e5f9f2864710e18803466759b86ffc90ae42065b4da01a853a2e. Apr 16 02:34:21.527316 containerd[1580]: time="2026-04-16T02:34:21.527065930Z" level=info msg="StartContainer for \"1978e3ff2806e5f9f2864710e18803466759b86ffc90ae42065b4da01a853a2e\" returns successfully" Apr 16 02:34:21.534383 containerd[1580]: time="2026-04-16T02:34:21.534277310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 16 02:34:21.802643 systemd-networkd[1491]: vxlan.calico: Gained IPv6LL Apr 16 02:34:24.305935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2919001565.mount: Deactivated successfully. 
Apr 16 02:34:24.439965 containerd[1580]: time="2026-04-16T02:34:24.439418456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:24.442396 containerd[1580]: time="2026-04-16T02:34:24.442156340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 16 02:34:24.445091 containerd[1580]: time="2026-04-16T02:34:24.444995203Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:24.458020 containerd[1580]: time="2026-04-16T02:34:24.457765747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:24.460575 containerd[1580]: time="2026-04-16T02:34:24.460361061Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.926032266s" Apr 16 02:34:24.460575 containerd[1580]: time="2026-04-16T02:34:24.460410247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 16 02:34:24.472563 containerd[1580]: time="2026-04-16T02:34:24.472436124Z" level=info msg="CreateContainer within sandbox \"3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 16 02:34:24.493907 
containerd[1580]: time="2026-04-16T02:34:24.493766517Z" level=info msg="Container b1bafd3e593e7d90bc21ae552eefd2345f80cada770969a2594c43e7f4c94303: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:34:24.514156 containerd[1580]: time="2026-04-16T02:34:24.514006246Z" level=info msg="CreateContainer within sandbox \"3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b1bafd3e593e7d90bc21ae552eefd2345f80cada770969a2594c43e7f4c94303\"" Apr 16 02:34:24.515431 containerd[1580]: time="2026-04-16T02:34:24.515394480Z" level=info msg="StartContainer for \"b1bafd3e593e7d90bc21ae552eefd2345f80cada770969a2594c43e7f4c94303\"" Apr 16 02:34:24.519061 containerd[1580]: time="2026-04-16T02:34:24.518444635Z" level=info msg="connecting to shim b1bafd3e593e7d90bc21ae552eefd2345f80cada770969a2594c43e7f4c94303" address="unix:///run/containerd/s/a6efcb075ea2f86f5220920244bf1017419f8df76628e084a1831f30e057686d" protocol=ttrpc version=3 Apr 16 02:34:24.564480 systemd[1]: Started cri-containerd-b1bafd3e593e7d90bc21ae552eefd2345f80cada770969a2594c43e7f4c94303.scope - libcontainer container b1bafd3e593e7d90bc21ae552eefd2345f80cada770969a2594c43e7f4c94303. 
Apr 16 02:34:24.687376 containerd[1580]: time="2026-04-16T02:34:24.687143355Z" level=info msg="StartContainer for \"b1bafd3e593e7d90bc21ae552eefd2345f80cada770969a2594c43e7f4c94303\" returns successfully" Apr 16 02:34:26.655402 containerd[1580]: time="2026-04-16T02:34:26.655314722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fc6d9f9b-pdkgn,Uid:58ccd239-9910-4be9-8659-4bd821d1f8db,Namespace:calico-system,Attempt:0,}" Apr 16 02:34:26.657500 containerd[1580]: time="2026-04-16T02:34:26.657451261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fc6d9f9b-xt2mx,Uid:f4ac2bdd-d2c4-4fbd-a124-1eb72e878921,Namespace:calico-system,Attempt:0,}" Apr 16 02:34:26.938935 systemd-networkd[1491]: cali900d80c0efb: Link UP Apr 16 02:34:26.940687 systemd-networkd[1491]: cali900d80c0efb: Gained carrier Apr 16 02:34:26.985284 kubelet[2735]: I0416 02:34:26.984192 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-569b69bfd9-clsvt" podStartSLOduration=4.2444880959999995 podStartE2EDuration="9.984154222s" podCreationTimestamp="2026-04-16 02:34:17 +0000 UTC" firstStartedPulling="2026-04-16 02:34:18.72291629 +0000 UTC m=+53.169947763" lastFinishedPulling="2026-04-16 02:34:24.462582407 +0000 UTC m=+58.909613889" observedRunningTime="2026-04-16 02:34:25.592160176 +0000 UTC m=+60.039191648" watchObservedRunningTime="2026-04-16 02:34:26.984154222 +0000 UTC m=+61.431185707" Apr 16 02:34:26.992151 containerd[1580]: 2026-04-16 02:34:26.717 [INFO][4339] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--74fc6d9f9b--pdkgn-eth0 calico-apiserver-74fc6d9f9b- calico-system 58ccd239-9910-4be9-8659-4bd821d1f8db 895 0 2026-04-16 02:33:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74fc6d9f9b projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-74fc6d9f9b-pdkgn eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali900d80c0efb [] [] }} ContainerID="ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc" Namespace="calico-system" Pod="calico-apiserver-74fc6d9f9b-pdkgn" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fc6d9f9b--pdkgn-" Apr 16 02:34:26.992151 containerd[1580]: 2026-04-16 02:34:26.717 [INFO][4339] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc" Namespace="calico-system" Pod="calico-apiserver-74fc6d9f9b-pdkgn" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fc6d9f9b--pdkgn-eth0" Apr 16 02:34:26.992151 containerd[1580]: 2026-04-16 02:34:26.758 [INFO][4363] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc" HandleID="k8s-pod-network.ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc" Workload="localhost-k8s-calico--apiserver--74fc6d9f9b--pdkgn-eth0" Apr 16 02:34:26.992727 containerd[1580]: 2026-04-16 02:34:26.776 [INFO][4363] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc" HandleID="k8s-pod-network.ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc" Workload="localhost-k8s-calico--apiserver--74fc6d9f9b--pdkgn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000366540), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-74fc6d9f9b-pdkgn", "timestamp":"2026-04-16 02:34:26.758607131 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000570f20)} Apr 16 02:34:26.992727 containerd[1580]: 2026-04-16 02:34:26.776 [INFO][4363] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 02:34:26.992727 containerd[1580]: 2026-04-16 02:34:26.776 [INFO][4363] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 02:34:26.992727 containerd[1580]: 2026-04-16 02:34:26.776 [INFO][4363] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 02:34:26.992727 containerd[1580]: 2026-04-16 02:34:26.781 [INFO][4363] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc" host="localhost" Apr 16 02:34:26.992727 containerd[1580]: 2026-04-16 02:34:26.791 [INFO][4363] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 02:34:26.992727 containerd[1580]: 2026-04-16 02:34:26.805 [INFO][4363] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 02:34:26.992727 containerd[1580]: 2026-04-16 02:34:26.818 [INFO][4363] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 02:34:26.992727 containerd[1580]: 2026-04-16 02:34:26.841 [INFO][4363] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 02:34:26.992727 containerd[1580]: 2026-04-16 02:34:26.841 [INFO][4363] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc" host="localhost" Apr 16 02:34:26.993580 containerd[1580]: 2026-04-16 02:34:26.848 [INFO][4363] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc Apr 16 02:34:26.993580 containerd[1580]: 2026-04-16 02:34:26.907 [INFO][4363] ipam/ipam.go 
1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc" host="localhost" Apr 16 02:34:26.993580 containerd[1580]: 2026-04-16 02:34:26.929 [INFO][4363] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc" host="localhost" Apr 16 02:34:26.993580 containerd[1580]: 2026-04-16 02:34:26.930 [INFO][4363] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc" host="localhost" Apr 16 02:34:26.993580 containerd[1580]: 2026-04-16 02:34:26.930 [INFO][4363] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 02:34:26.993580 containerd[1580]: 2026-04-16 02:34:26.930 [INFO][4363] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc" HandleID="k8s-pod-network.ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc" Workload="localhost-k8s-calico--apiserver--74fc6d9f9b--pdkgn-eth0" Apr 16 02:34:26.993952 containerd[1580]: 2026-04-16 02:34:26.933 [INFO][4339] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc" Namespace="calico-system" Pod="calico-apiserver-74fc6d9f9b-pdkgn" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fc6d9f9b--pdkgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74fc6d9f9b--pdkgn-eth0", GenerateName:"calico-apiserver-74fc6d9f9b-", Namespace:"calico-system", SelfLink:"", UID:"58ccd239-9910-4be9-8659-4bd821d1f8db", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.April, 
16, 2, 33, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74fc6d9f9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-74fc6d9f9b-pdkgn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali900d80c0efb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:34:26.994394 containerd[1580]: 2026-04-16 02:34:26.933 [INFO][4339] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc" Namespace="calico-system" Pod="calico-apiserver-74fc6d9f9b-pdkgn" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fc6d9f9b--pdkgn-eth0" Apr 16 02:34:26.994394 containerd[1580]: 2026-04-16 02:34:26.933 [INFO][4339] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali900d80c0efb ContainerID="ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc" Namespace="calico-system" Pod="calico-apiserver-74fc6d9f9b-pdkgn" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fc6d9f9b--pdkgn-eth0" Apr 16 02:34:26.994394 containerd[1580]: 2026-04-16 02:34:26.939 [INFO][4339] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc" Namespace="calico-system" Pod="calico-apiserver-74fc6d9f9b-pdkgn" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fc6d9f9b--pdkgn-eth0" Apr 16 02:34:26.994826 containerd[1580]: 2026-04-16 02:34:26.943 [INFO][4339] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc" Namespace="calico-system" Pod="calico-apiserver-74fc6d9f9b-pdkgn" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fc6d9f9b--pdkgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74fc6d9f9b--pdkgn-eth0", GenerateName:"calico-apiserver-74fc6d9f9b-", Namespace:"calico-system", SelfLink:"", UID:"58ccd239-9910-4be9-8659-4bd821d1f8db", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 33, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74fc6d9f9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc", Pod:"calico-apiserver-74fc6d9f9b-pdkgn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, 
InterfaceName:"cali900d80c0efb", MAC:"2e:8a:f6:19:2b:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:34:26.994927 containerd[1580]: 2026-04-16 02:34:26.988 [INFO][4339] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc" Namespace="calico-system" Pod="calico-apiserver-74fc6d9f9b-pdkgn" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fc6d9f9b--pdkgn-eth0" Apr 16 02:34:27.054971 containerd[1580]: time="2026-04-16T02:34:27.054865076Z" level=info msg="connecting to shim ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc" address="unix:///run/containerd/s/a31268da652ab0630f4d26a03cdcd2f3cbc7556b0b3b5a66c8bd159da385ef10" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:34:27.132819 systemd[1]: Started cri-containerd-ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc.scope - libcontainer container ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc. 
Apr 16 02:34:27.216878 systemd-networkd[1491]: calied5023c6ac6: Link UP Apr 16 02:34:27.217836 systemd-networkd[1491]: calied5023c6ac6: Gained carrier Apr 16 02:34:27.223048 systemd-resolved[1494]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 02:34:27.252940 containerd[1580]: 2026-04-16 02:34:26.716 [INFO][4333] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--74fc6d9f9b--xt2mx-eth0 calico-apiserver-74fc6d9f9b- calico-system f4ac2bdd-d2c4-4fbd-a124-1eb72e878921 894 0 2026-04-16 02:33:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74fc6d9f9b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-74fc6d9f9b-xt2mx eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calied5023c6ac6 [] [] }} ContainerID="a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e" Namespace="calico-system" Pod="calico-apiserver-74fc6d9f9b-xt2mx" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fc6d9f9b--xt2mx-" Apr 16 02:34:27.252940 containerd[1580]: 2026-04-16 02:34:26.716 [INFO][4333] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e" Namespace="calico-system" Pod="calico-apiserver-74fc6d9f9b-xt2mx" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fc6d9f9b--xt2mx-eth0" Apr 16 02:34:27.252940 containerd[1580]: 2026-04-16 02:34:26.760 [INFO][4361] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e" HandleID="k8s-pod-network.a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e" 
Workload="localhost-k8s-calico--apiserver--74fc6d9f9b--xt2mx-eth0" Apr 16 02:34:27.253549 containerd[1580]: 2026-04-16 02:34:26.778 [INFO][4361] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e" HandleID="k8s-pod-network.a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e" Workload="localhost-k8s-calico--apiserver--74fc6d9f9b--xt2mx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fda40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-74fc6d9f9b-xt2mx", "timestamp":"2026-04-16 02:34:26.760847387 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001994a0)} Apr 16 02:34:27.253549 containerd[1580]: 2026-04-16 02:34:26.778 [INFO][4361] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 02:34:27.253549 containerd[1580]: 2026-04-16 02:34:26.930 [INFO][4361] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 02:34:27.253549 containerd[1580]: 2026-04-16 02:34:26.931 [INFO][4361] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 02:34:27.253549 containerd[1580]: 2026-04-16 02:34:26.951 [INFO][4361] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e" host="localhost" Apr 16 02:34:27.253549 containerd[1580]: 2026-04-16 02:34:26.981 [INFO][4361] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 02:34:27.253549 containerd[1580]: 2026-04-16 02:34:27.022 [INFO][4361] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 02:34:27.253549 containerd[1580]: 2026-04-16 02:34:27.032 [INFO][4361] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 02:34:27.253549 containerd[1580]: 2026-04-16 02:34:27.052 [INFO][4361] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 02:34:27.253549 containerd[1580]: 2026-04-16 02:34:27.068 [INFO][4361] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e" host="localhost" Apr 16 02:34:27.253789 containerd[1580]: 2026-04-16 02:34:27.091 [INFO][4361] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e Apr 16 02:34:27.253789 containerd[1580]: 2026-04-16 02:34:27.137 [INFO][4361] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e" host="localhost" Apr 16 02:34:27.253789 containerd[1580]: 2026-04-16 02:34:27.193 [INFO][4361] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e" host="localhost" Apr 16 02:34:27.253789 containerd[1580]: 2026-04-16 02:34:27.195 [INFO][4361] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e" host="localhost" Apr 16 02:34:27.253789 containerd[1580]: 2026-04-16 02:34:27.195 [INFO][4361] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 02:34:27.253789 containerd[1580]: 2026-04-16 02:34:27.196 [INFO][4361] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e" HandleID="k8s-pod-network.a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e" Workload="localhost-k8s-calico--apiserver--74fc6d9f9b--xt2mx-eth0" Apr 16 02:34:27.253896 containerd[1580]: 2026-04-16 02:34:27.202 [INFO][4333] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e" Namespace="calico-system" Pod="calico-apiserver-74fc6d9f9b-xt2mx" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fc6d9f9b--xt2mx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74fc6d9f9b--xt2mx-eth0", GenerateName:"calico-apiserver-74fc6d9f9b-", Namespace:"calico-system", SelfLink:"", UID:"f4ac2bdd-d2c4-4fbd-a124-1eb72e878921", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 33, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74fc6d9f9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-74fc6d9f9b-xt2mx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calied5023c6ac6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:34:27.253965 containerd[1580]: 2026-04-16 02:34:27.202 [INFO][4333] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e" Namespace="calico-system" Pod="calico-apiserver-74fc6d9f9b-xt2mx" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fc6d9f9b--xt2mx-eth0" Apr 16 02:34:27.253965 containerd[1580]: 2026-04-16 02:34:27.202 [INFO][4333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied5023c6ac6 ContainerID="a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e" Namespace="calico-system" Pod="calico-apiserver-74fc6d9f9b-xt2mx" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fc6d9f9b--xt2mx-eth0" Apr 16 02:34:27.253965 containerd[1580]: 2026-04-16 02:34:27.217 [INFO][4333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e" Namespace="calico-system" Pod="calico-apiserver-74fc6d9f9b-xt2mx" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fc6d9f9b--xt2mx-eth0" Apr 16 02:34:27.254071 containerd[1580]: 2026-04-16 02:34:27.220 [INFO][4333] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e" Namespace="calico-system" Pod="calico-apiserver-74fc6d9f9b-xt2mx" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fc6d9f9b--xt2mx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74fc6d9f9b--xt2mx-eth0", GenerateName:"calico-apiserver-74fc6d9f9b-", Namespace:"calico-system", SelfLink:"", UID:"f4ac2bdd-d2c4-4fbd-a124-1eb72e878921", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 33, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74fc6d9f9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e", Pod:"calico-apiserver-74fc6d9f9b-xt2mx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calied5023c6ac6", MAC:"42:bf:78:12:b9:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:34:27.254134 containerd[1580]: 2026-04-16 02:34:27.243 [INFO][4333] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e" Namespace="calico-system" Pod="calico-apiserver-74fc6d9f9b-xt2mx" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fc6d9f9b--xt2mx-eth0" Apr 16 02:34:27.336043 containerd[1580]: time="2026-04-16T02:34:27.335798459Z" level=info msg="connecting to shim a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e" address="unix:///run/containerd/s/5f0aba82a8a9744f4b387b0956d5896103121f3797b2474286b174381bf1cd31" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:34:27.414464 containerd[1580]: time="2026-04-16T02:34:27.414204263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fc6d9f9b-pdkgn,Uid:58ccd239-9910-4be9-8659-4bd821d1f8db,Namespace:calico-system,Attempt:0,} returns sandbox id \"ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc\"" Apr 16 02:34:27.425405 containerd[1580]: time="2026-04-16T02:34:27.425369360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 02:34:27.436102 systemd[1]: Started cri-containerd-a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e.scope - libcontainer container a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e. 
Apr 16 02:34:27.465939 systemd-resolved[1494]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 02:34:27.609169 containerd[1580]: time="2026-04-16T02:34:27.608787869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fc6d9f9b-xt2mx,Uid:f4ac2bdd-d2c4-4fbd-a124-1eb72e878921,Namespace:calico-system,Attempt:0,} returns sandbox id \"a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e\"" Apr 16 02:34:27.656285 containerd[1580]: time="2026-04-16T02:34:27.656000491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rtc2h,Uid:849bda9a-9ad3-487b-a536-ce8b4d7a8312,Namespace:kube-system,Attempt:0,}" Apr 16 02:34:27.662056 containerd[1580]: time="2026-04-16T02:34:27.661659229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2l7kg,Uid:948e86e4-5504-4096-ac1e-cb13b7489af3,Namespace:calico-system,Attempt:0,}" Apr 16 02:34:28.177531 systemd-networkd[1491]: cali5126d76458a: Link UP Apr 16 02:34:28.180556 systemd-networkd[1491]: cali5126d76458a: Gained carrier Apr 16 02:34:28.213552 containerd[1580]: 2026-04-16 02:34:27.816 [INFO][4512] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--rtc2h-eth0 coredns-66bc5c9577- kube-system 849bda9a-9ad3-487b-a536-ce8b4d7a8312 890 0 2026-04-16 02:33:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-rtc2h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5126d76458a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9" Namespace="kube-system" 
Pod="coredns-66bc5c9577-rtc2h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rtc2h-" Apr 16 02:34:28.213552 containerd[1580]: 2026-04-16 02:34:27.818 [INFO][4512] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9" Namespace="kube-system" Pod="coredns-66bc5c9577-rtc2h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rtc2h-eth0" Apr 16 02:34:28.213552 containerd[1580]: 2026-04-16 02:34:27.943 [INFO][4541] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9" HandleID="k8s-pod-network.baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9" Workload="localhost-k8s-coredns--66bc5c9577--rtc2h-eth0" Apr 16 02:34:28.215401 containerd[1580]: 2026-04-16 02:34:27.965 [INFO][4541] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9" HandleID="k8s-pod-network.baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9" Workload="localhost-k8s-coredns--66bc5c9577--rtc2h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000431570), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-rtc2h", "timestamp":"2026-04-16 02:34:27.943591262 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000396f20)} Apr 16 02:34:28.215401 containerd[1580]: 2026-04-16 02:34:27.965 [INFO][4541] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 02:34:28.215401 containerd[1580]: 2026-04-16 02:34:27.965 [INFO][4541] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 02:34:28.215401 containerd[1580]: 2026-04-16 02:34:27.965 [INFO][4541] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 02:34:28.215401 containerd[1580]: 2026-04-16 02:34:27.976 [INFO][4541] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9" host="localhost" Apr 16 02:34:28.215401 containerd[1580]: 2026-04-16 02:34:28.014 [INFO][4541] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 02:34:28.215401 containerd[1580]: 2026-04-16 02:34:28.036 [INFO][4541] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 02:34:28.215401 containerd[1580]: 2026-04-16 02:34:28.046 [INFO][4541] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 02:34:28.215401 containerd[1580]: 2026-04-16 02:34:28.109 [INFO][4541] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 02:34:28.215401 containerd[1580]: 2026-04-16 02:34:28.109 [INFO][4541] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9" host="localhost" Apr 16 02:34:28.220032 containerd[1580]: 2026-04-16 02:34:28.118 [INFO][4541] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9 Apr 16 02:34:28.220032 containerd[1580]: 2026-04-16 02:34:28.137 [INFO][4541] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9" host="localhost" Apr 16 02:34:28.220032 containerd[1580]: 2026-04-16 02:34:28.161 [INFO][4541] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9" host="localhost" Apr 16 02:34:28.220032 containerd[1580]: 2026-04-16 02:34:28.161 [INFO][4541] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9" host="localhost" Apr 16 02:34:28.220032 containerd[1580]: 2026-04-16 02:34:28.161 [INFO][4541] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 02:34:28.220032 containerd[1580]: 2026-04-16 02:34:28.161 [INFO][4541] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9" HandleID="k8s-pod-network.baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9" Workload="localhost-k8s-coredns--66bc5c9577--rtc2h-eth0" Apr 16 02:34:28.220527 containerd[1580]: 2026-04-16 02:34:28.164 [INFO][4512] cni-plugin/k8s.go 418: Populated endpoint ContainerID="baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9" Namespace="kube-system" Pod="coredns-66bc5c9577-rtc2h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rtc2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--rtc2h-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"849bda9a-9ad3-487b-a536-ce8b4d7a8312", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 33, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-rtc2h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5126d76458a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:34:28.220527 containerd[1580]: 2026-04-16 02:34:28.164 [INFO][4512] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9" Namespace="kube-system" Pod="coredns-66bc5c9577-rtc2h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rtc2h-eth0" Apr 16 02:34:28.220527 containerd[1580]: 2026-04-16 02:34:28.165 [INFO][4512] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5126d76458a ContainerID="baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9" Namespace="kube-system" Pod="coredns-66bc5c9577-rtc2h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rtc2h-eth0" Apr 16 
02:34:28.220527 containerd[1580]: 2026-04-16 02:34:28.182 [INFO][4512] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9" Namespace="kube-system" Pod="coredns-66bc5c9577-rtc2h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rtc2h-eth0" Apr 16 02:34:28.220527 containerd[1580]: 2026-04-16 02:34:28.184 [INFO][4512] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9" Namespace="kube-system" Pod="coredns-66bc5c9577-rtc2h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rtc2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--rtc2h-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"849bda9a-9ad3-487b-a536-ce8b4d7a8312", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 33, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9", Pod:"coredns-66bc5c9577-rtc2h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5126d76458a", 
MAC:"be:59:f3:78:0b:be", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:34:28.220527 containerd[1580]: 2026-04-16 02:34:28.205 [INFO][4512] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9" Namespace="kube-system" Pod="coredns-66bc5c9577-rtc2h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rtc2h-eth0" Apr 16 02:34:28.346065 containerd[1580]: time="2026-04-16T02:34:28.345929128Z" level=info msg="connecting to shim baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9" address="unix:///run/containerd/s/1042c30f035495c1bb01a373ffed276cba49f6138e96b862edc8d8d4c30a6b17" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:34:28.423143 systemd-networkd[1491]: cali789f00e8998: Link UP Apr 16 02:34:28.427445 systemd-networkd[1491]: cali789f00e8998: Gained carrier Apr 16 02:34:28.439900 systemd[1]: Started cri-containerd-baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9.scope - libcontainer container baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9. 
Apr 16 02:34:28.474885 containerd[1580]: 2026-04-16 02:34:27.834 [INFO][4523] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--2l7kg-eth0 csi-node-driver- calico-system 948e86e4-5504-4096-ac1e-cb13b7489af3 739 0 2026-04-16 02:33:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-2l7kg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali789f00e8998 [] [] }} ContainerID="a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78" Namespace="calico-system" Pod="csi-node-driver-2l7kg" WorkloadEndpoint="localhost-k8s-csi--node--driver--2l7kg-" Apr 16 02:34:28.474885 containerd[1580]: 2026-04-16 02:34:27.836 [INFO][4523] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78" Namespace="calico-system" Pod="csi-node-driver-2l7kg" WorkloadEndpoint="localhost-k8s-csi--node--driver--2l7kg-eth0" Apr 16 02:34:28.474885 containerd[1580]: 2026-04-16 02:34:27.964 [INFO][4547] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78" HandleID="k8s-pod-network.a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78" Workload="localhost-k8s-csi--node--driver--2l7kg-eth0" Apr 16 02:34:28.474885 containerd[1580]: 2026-04-16 02:34:27.982 [INFO][4547] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78" HandleID="k8s-pod-network.a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78" 
Workload="localhost-k8s-csi--node--driver--2l7kg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e170), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-2l7kg", "timestamp":"2026-04-16 02:34:27.964428235 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00029adc0)} Apr 16 02:34:28.474885 containerd[1580]: 2026-04-16 02:34:27.983 [INFO][4547] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 02:34:28.474885 containerd[1580]: 2026-04-16 02:34:28.161 [INFO][4547] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 02:34:28.474885 containerd[1580]: 2026-04-16 02:34:28.162 [INFO][4547] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 02:34:28.474885 containerd[1580]: 2026-04-16 02:34:28.176 [INFO][4547] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78" host="localhost" Apr 16 02:34:28.474885 containerd[1580]: 2026-04-16 02:34:28.193 [INFO][4547] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 02:34:28.474885 containerd[1580]: 2026-04-16 02:34:28.220 [INFO][4547] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 02:34:28.474885 containerd[1580]: 2026-04-16 02:34:28.230 [INFO][4547] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 02:34:28.474885 containerd[1580]: 2026-04-16 02:34:28.304 [INFO][4547] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 02:34:28.474885 containerd[1580]: 2026-04-16 02:34:28.304 [INFO][4547] ipam/ipam.go 1245: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78" host="localhost" Apr 16 02:34:28.474885 containerd[1580]: 2026-04-16 02:34:28.312 [INFO][4547] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78 Apr 16 02:34:28.474885 containerd[1580]: 2026-04-16 02:34:28.339 [INFO][4547] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78" host="localhost" Apr 16 02:34:28.474885 containerd[1580]: 2026-04-16 02:34:28.392 [INFO][4547] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78" host="localhost" Apr 16 02:34:28.474885 containerd[1580]: 2026-04-16 02:34:28.393 [INFO][4547] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78" host="localhost" Apr 16 02:34:28.474885 containerd[1580]: 2026-04-16 02:34:28.394 [INFO][4547] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 02:34:28.474885 containerd[1580]: 2026-04-16 02:34:28.395 [INFO][4547] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78" HandleID="k8s-pod-network.a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78" Workload="localhost-k8s-csi--node--driver--2l7kg-eth0" Apr 16 02:34:28.477435 systemd-resolved[1494]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 02:34:28.478949 containerd[1580]: 2026-04-16 02:34:28.405 [INFO][4523] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78" Namespace="calico-system" Pod="csi-node-driver-2l7kg" WorkloadEndpoint="localhost-k8s-csi--node--driver--2l7kg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2l7kg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"948e86e4-5504-4096-ac1e-cb13b7489af3", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 33, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-2l7kg", Endpoint:"eth0", 
ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali789f00e8998", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:34:28.478949 containerd[1580]: 2026-04-16 02:34:28.405 [INFO][4523] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78" Namespace="calico-system" Pod="csi-node-driver-2l7kg" WorkloadEndpoint="localhost-k8s-csi--node--driver--2l7kg-eth0" Apr 16 02:34:28.478949 containerd[1580]: 2026-04-16 02:34:28.405 [INFO][4523] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali789f00e8998 ContainerID="a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78" Namespace="calico-system" Pod="csi-node-driver-2l7kg" WorkloadEndpoint="localhost-k8s-csi--node--driver--2l7kg-eth0" Apr 16 02:34:28.478949 containerd[1580]: 2026-04-16 02:34:28.435 [INFO][4523] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78" Namespace="calico-system" Pod="csi-node-driver-2l7kg" WorkloadEndpoint="localhost-k8s-csi--node--driver--2l7kg-eth0" Apr 16 02:34:28.478949 containerd[1580]: 2026-04-16 02:34:28.438 [INFO][4523] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78" Namespace="calico-system" Pod="csi-node-driver-2l7kg" WorkloadEndpoint="localhost-k8s-csi--node--driver--2l7kg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2l7kg-eth0", GenerateName:"csi-node-driver-", 
Namespace:"calico-system", SelfLink:"", UID:"948e86e4-5504-4096-ac1e-cb13b7489af3", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 33, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78", Pod:"csi-node-driver-2l7kg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali789f00e8998", MAC:"da:54:d3:76:5b:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:34:28.478949 containerd[1580]: 2026-04-16 02:34:28.468 [INFO][4523] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78" Namespace="calico-system" Pod="csi-node-driver-2l7kg" WorkloadEndpoint="localhost-k8s-csi--node--driver--2l7kg-eth0" Apr 16 02:34:28.629382 containerd[1580]: time="2026-04-16T02:34:28.629117455Z" level=info msg="connecting to shim a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78" address="unix:///run/containerd/s/f813b57324f6f9e0fd15c3fe5d74558e815c4c1ee895e57e5dd16dc051e26ad9" namespace=k8s.io protocol=ttrpc version=3
Apr 16 02:34:28.630332 containerd[1580]: time="2026-04-16T02:34:28.630246861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rtc2h,Uid:849bda9a-9ad3-487b-a536-ce8b4d7a8312,Namespace:kube-system,Attempt:0,} returns sandbox id \"baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9\"" Apr 16 02:34:28.674443 containerd[1580]: time="2026-04-16T02:34:28.673718844Z" level=info msg="CreateContainer within sandbox \"baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 02:34:28.678392 containerd[1580]: time="2026-04-16T02:34:28.678137284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-kbk5r,Uid:16700172-7e74-43cf-8388-58d9547167db,Namespace:calico-system,Attempt:0,}" Apr 16 02:34:28.713981 systemd[1]: Started cri-containerd-a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78.scope - libcontainer container a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78. Apr 16 02:34:28.747858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1250579243.mount: Deactivated successfully.
Apr 16 02:34:28.833351 containerd[1580]: time="2026-04-16T02:34:28.833110829Z" level=info msg="Container e904de0db84af736e11f0cd198a9b9d9d712b80217245bf09d7c40e46fc54545: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:34:28.843501 systemd-networkd[1491]: cali900d80c0efb: Gained IPv6LL Apr 16 02:34:28.868458 systemd-resolved[1494]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 02:34:28.899615 containerd[1580]: time="2026-04-16T02:34:28.898872356Z" level=info msg="CreateContainer within sandbox \"baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e904de0db84af736e11f0cd198a9b9d9d712b80217245bf09d7c40e46fc54545\"" Apr 16 02:34:28.908385 containerd[1580]: time="2026-04-16T02:34:28.906433206Z" level=info msg="StartContainer for \"e904de0db84af736e11f0cd198a9b9d9d712b80217245bf09d7c40e46fc54545\"" Apr 16 02:34:28.914347 containerd[1580]: time="2026-04-16T02:34:28.914146194Z" level=info msg="connecting to shim e904de0db84af736e11f0cd198a9b9d9d712b80217245bf09d7c40e46fc54545" address="unix:///run/containerd/s/1042c30f035495c1bb01a373ffed276cba49f6138e96b862edc8d8d4c30a6b17" protocol=ttrpc version=3 Apr 16 02:34:28.943796 containerd[1580]: time="2026-04-16T02:34:28.943680388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2l7kg,Uid:948e86e4-5504-4096-ac1e-cb13b7489af3,Namespace:calico-system,Attempt:0,} returns sandbox id \"a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78\"" Apr 16 02:34:28.980241 systemd[1]: Started cri-containerd-e904de0db84af736e11f0cd198a9b9d9d712b80217245bf09d7c40e46fc54545.scope - libcontainer container e904de0db84af736e11f0cd198a9b9d9d712b80217245bf09d7c40e46fc54545. 
Apr 16 02:34:29.088586 containerd[1580]: time="2026-04-16T02:34:29.088365823Z" level=info msg="StartContainer for \"e904de0db84af736e11f0cd198a9b9d9d712b80217245bf09d7c40e46fc54545\" returns successfully" Apr 16 02:34:29.100879 systemd-networkd[1491]: calied5023c6ac6: Gained IPv6LL Apr 16 02:34:29.317709 systemd-networkd[1491]: cali295d7d863b9: Link UP Apr 16 02:34:29.319571 systemd-networkd[1491]: cali295d7d863b9: Gained carrier Apr 16 02:34:29.394431 containerd[1580]: 2026-04-16 02:34:28.937 [INFO][4673] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--cccfbd5cf--kbk5r-eth0 goldmane-cccfbd5cf- calico-system 16700172-7e74-43cf-8388-58d9547167db 891 0 2026-04-16 02:33:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-cccfbd5cf-kbk5r eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali295d7d863b9 [] [] }} ContainerID="e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15" Namespace="calico-system" Pod="goldmane-cccfbd5cf-kbk5r" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--kbk5r-" Apr 16 02:34:29.394431 containerd[1580]: 2026-04-16 02:34:28.938 [INFO][4673] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15" Namespace="calico-system" Pod="goldmane-cccfbd5cf-kbk5r" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--kbk5r-eth0" Apr 16 02:34:29.394431 containerd[1580]: 2026-04-16 02:34:29.081 [INFO][4718] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15" HandleID="k8s-pod-network.e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15" 
Workload="localhost-k8s-goldmane--cccfbd5cf--kbk5r-eth0" Apr 16 02:34:29.394431 containerd[1580]: 2026-04-16 02:34:29.098 [INFO][4718] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15" HandleID="k8s-pod-network.e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15" Workload="localhost-k8s-goldmane--cccfbd5cf--kbk5r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bf8b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-cccfbd5cf-kbk5r", "timestamp":"2026-04-16 02:34:29.081339431 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001f46e0)} Apr 16 02:34:29.394431 containerd[1580]: 2026-04-16 02:34:29.098 [INFO][4718] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 02:34:29.394431 containerd[1580]: 2026-04-16 02:34:29.098 [INFO][4718] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 02:34:29.394431 containerd[1580]: 2026-04-16 02:34:29.098 [INFO][4718] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 02:34:29.394431 containerd[1580]: 2026-04-16 02:34:29.111 [INFO][4718] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15" host="localhost" Apr 16 02:34:29.394431 containerd[1580]: 2026-04-16 02:34:29.145 [INFO][4718] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 02:34:29.394431 containerd[1580]: 2026-04-16 02:34:29.168 [INFO][4718] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 02:34:29.394431 containerd[1580]: 2026-04-16 02:34:29.179 [INFO][4718] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 02:34:29.394431 containerd[1580]: 2026-04-16 02:34:29.188 [INFO][4718] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 02:34:29.394431 containerd[1580]: 2026-04-16 02:34:29.188 [INFO][4718] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15" host="localhost" Apr 16 02:34:29.394431 containerd[1580]: 2026-04-16 02:34:29.199 [INFO][4718] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15 Apr 16 02:34:29.394431 containerd[1580]: 2026-04-16 02:34:29.226 [INFO][4718] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15" host="localhost" Apr 16 02:34:29.394431 containerd[1580]: 2026-04-16 02:34:29.300 [INFO][4718] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15" host="localhost" Apr 16 02:34:29.394431 containerd[1580]: 2026-04-16 02:34:29.301 [INFO][4718] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15" host="localhost" Apr 16 02:34:29.394431 containerd[1580]: 2026-04-16 02:34:29.302 [INFO][4718] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 02:34:29.394431 containerd[1580]: 2026-04-16 02:34:29.303 [INFO][4718] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15" HandleID="k8s-pod-network.e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15" Workload="localhost-k8s-goldmane--cccfbd5cf--kbk5r-eth0" Apr 16 02:34:29.397075 containerd[1580]: 2026-04-16 02:34:29.309 [INFO][4673] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15" Namespace="calico-system" Pod="goldmane-cccfbd5cf-kbk5r" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--kbk5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--kbk5r-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"16700172-7e74-43cf-8388-58d9547167db", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 33, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-cccfbd5cf-kbk5r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali295d7d863b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:34:29.397075 containerd[1580]: 2026-04-16 02:34:29.310 [INFO][4673] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15" Namespace="calico-system" Pod="goldmane-cccfbd5cf-kbk5r" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--kbk5r-eth0" Apr 16 02:34:29.397075 containerd[1580]: 2026-04-16 02:34:29.310 [INFO][4673] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali295d7d863b9 ContainerID="e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15" Namespace="calico-system" Pod="goldmane-cccfbd5cf-kbk5r" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--kbk5r-eth0" Apr 16 02:34:29.397075 containerd[1580]: 2026-04-16 02:34:29.320 [INFO][4673] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15" Namespace="calico-system" Pod="goldmane-cccfbd5cf-kbk5r" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--kbk5r-eth0" Apr 16 02:34:29.397075 containerd[1580]: 2026-04-16 02:34:29.328 [INFO][4673] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15" Namespace="calico-system" Pod="goldmane-cccfbd5cf-kbk5r" 
WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--kbk5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--kbk5r-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"16700172-7e74-43cf-8388-58d9547167db", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 33, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15", Pod:"goldmane-cccfbd5cf-kbk5r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali295d7d863b9", MAC:"f2:05:b5:1e:bd:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:34:29.397075 containerd[1580]: 2026-04-16 02:34:29.384 [INFO][4673] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15" Namespace="calico-system" Pod="goldmane-cccfbd5cf-kbk5r" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--kbk5r-eth0" Apr 16 02:34:29.494006 systemd-networkd[1491]: cali5126d76458a: Gained IPv6LL Apr 16 02:34:29.545442 containerd[1580]: 
time="2026-04-16T02:34:29.545187816Z" level=info msg="connecting to shim e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15" address="unix:///run/containerd/s/9ab5441f7401cc7fe92cbb19ec26537075c00ed9f7a94b759de337bbf2f417c6" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:34:29.626436 systemd[1]: Started cri-containerd-e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15.scope - libcontainer container e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15. Apr 16 02:34:29.655872 containerd[1580]: time="2026-04-16T02:34:29.655729639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jc7c4,Uid:8b236eee-56e9-410d-a6b6-5768df51bbdf,Namespace:kube-system,Attempt:0,}" Apr 16 02:34:29.669895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1516892329.mount: Deactivated successfully. Apr 16 02:34:29.716286 kubelet[2735]: I0416 02:34:29.716004 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rtc2h" podStartSLOduration=56.715963311 podStartE2EDuration="56.715963311s" podCreationTimestamp="2026-04-16 02:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:34:29.619896132 +0000 UTC m=+64.066927609" watchObservedRunningTime="2026-04-16 02:34:29.715963311 +0000 UTC m=+64.162994793" Apr 16 02:34:29.838419 systemd-resolved[1494]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 02:34:30.021069 containerd[1580]: time="2026-04-16T02:34:30.020880998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-kbk5r,Uid:16700172-7e74-43cf-8388-58d9547167db,Namespace:calico-system,Attempt:0,} returns sandbox id \"e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15\"" Apr 16 02:34:30.196413 systemd-networkd[1491]: cali789f00e8998: Gained IPv6LL
Apr 16 02:34:30.328476 systemd-networkd[1491]: cali2f117522cf6: Link UP Apr 16 02:34:30.331145 systemd-networkd[1491]: cali2f117522cf6: Gained carrier Apr 16 02:34:30.365288 containerd[1580]: 2026-04-16 02:34:29.932 [INFO][4805] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--jc7c4-eth0 coredns-66bc5c9577- kube-system 8b236eee-56e9-410d-a6b6-5768df51bbdf 883 0 2026-04-16 02:33:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-jc7c4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2f117522cf6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c" Namespace="kube-system" Pod="coredns-66bc5c9577-jc7c4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jc7c4-" Apr 16 02:34:30.365288 containerd[1580]: 2026-04-16 02:34:29.933 [INFO][4805] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c" Namespace="kube-system" Pod="coredns-66bc5c9577-jc7c4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jc7c4-eth0" Apr 16 02:34:30.365288 containerd[1580]: 2026-04-16 02:34:30.069 [INFO][4839] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c" HandleID="k8s-pod-network.623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c" Workload="localhost-k8s-coredns--66bc5c9577--jc7c4-eth0" Apr 16 02:34:30.365288 containerd[1580]: 2026-04-16 02:34:30.091 [INFO][4839] ipam/ipam_plugin.go 301: Auto assigning IP
ContainerID="623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c" HandleID="k8s-pod-network.623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c" Workload="localhost-k8s-coredns--66bc5c9577--jc7c4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003698d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-jc7c4", "timestamp":"2026-04-16 02:34:30.069071089 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002ed1e0)} Apr 16 02:34:30.365288 containerd[1580]: 2026-04-16 02:34:30.092 [INFO][4839] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 02:34:30.365288 containerd[1580]: 2026-04-16 02:34:30.092 [INFO][4839] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 02:34:30.365288 containerd[1580]: 2026-04-16 02:34:30.092 [INFO][4839] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 02:34:30.365288 containerd[1580]: 2026-04-16 02:34:30.102 [INFO][4839] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c" host="localhost" Apr 16 02:34:30.365288 containerd[1580]: 2026-04-16 02:34:30.126 [INFO][4839] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 02:34:30.365288 containerd[1580]: 2026-04-16 02:34:30.207 [INFO][4839] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 02:34:30.365288 containerd[1580]: 2026-04-16 02:34:30.221 [INFO][4839] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 02:34:30.365288 containerd[1580]: 2026-04-16 02:34:30.243 [INFO][4839] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 02:34:30.365288 containerd[1580]: 2026-04-16 02:34:30.243 [INFO][4839] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c" host="localhost" Apr 16 02:34:30.365288 containerd[1580]: 2026-04-16 02:34:30.269 [INFO][4839] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c Apr 16 02:34:30.365288 containerd[1580]: 2026-04-16 02:34:30.286 [INFO][4839] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c" host="localhost" Apr 16 02:34:30.365288 containerd[1580]: 2026-04-16 02:34:30.316 [INFO][4839] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c" host="localhost" Apr 16 02:34:30.365288 containerd[1580]: 2026-04-16 02:34:30.316 [INFO][4839] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c" host="localhost" Apr 16 02:34:30.365288 containerd[1580]: 2026-04-16 02:34:30.316 [INFO][4839] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 02:34:30.365288 containerd[1580]: 2026-04-16 02:34:30.316 [INFO][4839] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c" HandleID="k8s-pod-network.623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c" Workload="localhost-k8s-coredns--66bc5c9577--jc7c4-eth0" Apr 16 02:34:30.366353 containerd[1580]: 2026-04-16 02:34:30.322 [INFO][4805] cni-plugin/k8s.go 418: Populated endpoint ContainerID="623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c" Namespace="kube-system" Pod="coredns-66bc5c9577-jc7c4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jc7c4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--jc7c4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8b236eee-56e9-410d-a6b6-5768df51bbdf", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 33, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-jc7c4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f117522cf6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:34:30.366353 containerd[1580]: 2026-04-16 02:34:30.323 [INFO][4805] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c" Namespace="kube-system" Pod="coredns-66bc5c9577-jc7c4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jc7c4-eth0" Apr 16 02:34:30.366353 containerd[1580]: 2026-04-16 02:34:30.323 [INFO][4805] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f117522cf6 ContainerID="623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c" Namespace="kube-system" Pod="coredns-66bc5c9577-jc7c4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jc7c4-eth0" Apr 16 
02:34:30.366353 containerd[1580]: 2026-04-16 02:34:30.325 [INFO][4805] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c" Namespace="kube-system" Pod="coredns-66bc5c9577-jc7c4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jc7c4-eth0" Apr 16 02:34:30.366353 containerd[1580]: 2026-04-16 02:34:30.330 [INFO][4805] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c" Namespace="kube-system" Pod="coredns-66bc5c9577-jc7c4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jc7c4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--jc7c4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8b236eee-56e9-410d-a6b6-5768df51bbdf", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 33, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c", Pod:"coredns-66bc5c9577-jc7c4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f117522cf6", 
MAC:"b6:70:33:8d:3b:7f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:34:30.366353 containerd[1580]: 2026-04-16 02:34:30.363 [INFO][4805] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c" Namespace="kube-system" Pod="coredns-66bc5c9577-jc7c4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jc7c4-eth0" Apr 16 02:34:30.404897 containerd[1580]: time="2026-04-16T02:34:30.404764875Z" level=info msg="connecting to shim 623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c" address="unix:///run/containerd/s/850c0b6c9ec1ce25d7116d8eb8abba502e4607390668935ad93c6f45d4bcee47" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:34:30.521430 systemd[1]: Started cri-containerd-623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c.scope - libcontainer container 623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c. 
Apr 16 02:34:30.554107 systemd-resolved[1494]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 02:34:30.615341 containerd[1580]: time="2026-04-16T02:34:30.614057910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jc7c4,Uid:8b236eee-56e9-410d-a6b6-5768df51bbdf,Namespace:kube-system,Attempt:0,} returns sandbox id \"623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c\"" Apr 16 02:34:30.628439 containerd[1580]: time="2026-04-16T02:34:30.628386911Z" level=info msg="CreateContainer within sandbox \"623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 02:34:30.642644 containerd[1580]: time="2026-04-16T02:34:30.642537716Z" level=info msg="Container c69b554c77ec5e7e1765a5da5cc7c019998b4c636b235a47554d91adb5954975: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:34:30.651494 containerd[1580]: time="2026-04-16T02:34:30.651185905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf9694cf9-kwmsb,Uid:62859b98-ece2-40d6-b938-ca7d0fe4bc04,Namespace:calico-system,Attempt:0,}" Apr 16 02:34:30.655856 containerd[1580]: time="2026-04-16T02:34:30.655810594Z" level=info msg="CreateContainer within sandbox \"623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c69b554c77ec5e7e1765a5da5cc7c019998b4c636b235a47554d91adb5954975\"" Apr 16 02:34:30.657233 containerd[1580]: time="2026-04-16T02:34:30.657180959Z" level=info msg="StartContainer for \"c69b554c77ec5e7e1765a5da5cc7c019998b4c636b235a47554d91adb5954975\"" Apr 16 02:34:30.658174 containerd[1580]: time="2026-04-16T02:34:30.658128899Z" level=info msg="connecting to shim c69b554c77ec5e7e1765a5da5cc7c019998b4c636b235a47554d91adb5954975" address="unix:///run/containerd/s/850c0b6c9ec1ce25d7116d8eb8abba502e4607390668935ad93c6f45d4bcee47" 
protocol=ttrpc version=3 Apr 16 02:34:30.732830 systemd[1]: Started cri-containerd-c69b554c77ec5e7e1765a5da5cc7c019998b4c636b235a47554d91adb5954975.scope - libcontainer container c69b554c77ec5e7e1765a5da5cc7c019998b4c636b235a47554d91adb5954975. Apr 16 02:34:30.855070 containerd[1580]: time="2026-04-16T02:34:30.854898610Z" level=info msg="StartContainer for \"c69b554c77ec5e7e1765a5da5cc7c019998b4c636b235a47554d91adb5954975\" returns successfully" Apr 16 02:34:31.140856 systemd-networkd[1491]: calia722d393b5f: Link UP Apr 16 02:34:31.147170 systemd-networkd[1491]: calia722d393b5f: Gained carrier Apr 16 02:34:31.182258 containerd[1580]: 2026-04-16 02:34:30.811 [INFO][4917] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7cf9694cf9--kwmsb-eth0 calico-kube-controllers-7cf9694cf9- calico-system 62859b98-ece2-40d6-b938-ca7d0fe4bc04 888 0 2026-04-16 02:33:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7cf9694cf9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7cf9694cf9-kwmsb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia722d393b5f [] [] }} ContainerID="b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2" Namespace="calico-system" Pod="calico-kube-controllers-7cf9694cf9-kwmsb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf9694cf9--kwmsb-" Apr 16 02:34:31.182258 containerd[1580]: 2026-04-16 02:34:30.812 [INFO][4917] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2" Namespace="calico-system" Pod="calico-kube-controllers-7cf9694cf9-kwmsb" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf9694cf9--kwmsb-eth0" Apr 16 02:34:31.182258 containerd[1580]: 2026-04-16 02:34:30.889 [INFO][4965] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2" HandleID="k8s-pod-network.b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2" Workload="localhost-k8s-calico--kube--controllers--7cf9694cf9--kwmsb-eth0" Apr 16 02:34:31.182258 containerd[1580]: 2026-04-16 02:34:30.909 [INFO][4965] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2" HandleID="k8s-pod-network.b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2" Workload="localhost-k8s-calico--kube--controllers--7cf9694cf9--kwmsb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdb20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7cf9694cf9-kwmsb", "timestamp":"2026-04-16 02:34:30.889631189 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003aa580)} Apr 16 02:34:31.182258 containerd[1580]: 2026-04-16 02:34:30.910 [INFO][4965] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 02:34:31.182258 containerd[1580]: 2026-04-16 02:34:30.911 [INFO][4965] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 02:34:31.182258 containerd[1580]: 2026-04-16 02:34:30.911 [INFO][4965] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 02:34:31.182258 containerd[1580]: 2026-04-16 02:34:30.923 [INFO][4965] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2" host="localhost" Apr 16 02:34:31.182258 containerd[1580]: 2026-04-16 02:34:31.016 [INFO][4965] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 02:34:31.182258 containerd[1580]: 2026-04-16 02:34:31.062 [INFO][4965] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 02:34:31.182258 containerd[1580]: 2026-04-16 02:34:31.073 [INFO][4965] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 02:34:31.182258 containerd[1580]: 2026-04-16 02:34:31.084 [INFO][4965] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 02:34:31.182258 containerd[1580]: 2026-04-16 02:34:31.084 [INFO][4965] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2" host="localhost" Apr 16 02:34:31.182258 containerd[1580]: 2026-04-16 02:34:31.091 [INFO][4965] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2 Apr 16 02:34:31.182258 containerd[1580]: 2026-04-16 02:34:31.110 [INFO][4965] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2" host="localhost" Apr 16 02:34:31.182258 containerd[1580]: 2026-04-16 02:34:31.130 [INFO][4965] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2" host="localhost" Apr 16 02:34:31.182258 containerd[1580]: 2026-04-16 02:34:31.130 [INFO][4965] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2" host="localhost" Apr 16 02:34:31.182258 containerd[1580]: 2026-04-16 02:34:31.130 [INFO][4965] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 02:34:31.182258 containerd[1580]: 2026-04-16 02:34:31.130 [INFO][4965] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2" HandleID="k8s-pod-network.b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2" Workload="localhost-k8s-calico--kube--controllers--7cf9694cf9--kwmsb-eth0" Apr 16 02:34:31.183800 containerd[1580]: 2026-04-16 02:34:31.133 [INFO][4917] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2" Namespace="calico-system" Pod="calico-kube-controllers-7cf9694cf9-kwmsb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf9694cf9--kwmsb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cf9694cf9--kwmsb-eth0", GenerateName:"calico-kube-controllers-7cf9694cf9-", Namespace:"calico-system", SelfLink:"", UID:"62859b98-ece2-40d6-b938-ca7d0fe4bc04", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 33, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cf9694cf9", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7cf9694cf9-kwmsb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia722d393b5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:34:31.183800 containerd[1580]: 2026-04-16 02:34:31.133 [INFO][4917] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2" Namespace="calico-system" Pod="calico-kube-controllers-7cf9694cf9-kwmsb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf9694cf9--kwmsb-eth0" Apr 16 02:34:31.183800 containerd[1580]: 2026-04-16 02:34:31.133 [INFO][4917] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia722d393b5f ContainerID="b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2" Namespace="calico-system" Pod="calico-kube-controllers-7cf9694cf9-kwmsb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf9694cf9--kwmsb-eth0" Apr 16 02:34:31.183800 containerd[1580]: 2026-04-16 02:34:31.158 [INFO][4917] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2" Namespace="calico-system" Pod="calico-kube-controllers-7cf9694cf9-kwmsb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf9694cf9--kwmsb-eth0" Apr 16 02:34:31.183800 containerd[1580]: 
2026-04-16 02:34:31.159 [INFO][4917] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2" Namespace="calico-system" Pod="calico-kube-controllers-7cf9694cf9-kwmsb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf9694cf9--kwmsb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cf9694cf9--kwmsb-eth0", GenerateName:"calico-kube-controllers-7cf9694cf9-", Namespace:"calico-system", SelfLink:"", UID:"62859b98-ece2-40d6-b938-ca7d0fe4bc04", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 33, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cf9694cf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2", Pod:"calico-kube-controllers-7cf9694cf9-kwmsb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia722d393b5f", MAC:"26:18:e7:5a:c6:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:34:31.183800 containerd[1580]: 
2026-04-16 02:34:31.176 [INFO][4917] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2" Namespace="calico-system" Pod="calico-kube-controllers-7cf9694cf9-kwmsb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf9694cf9--kwmsb-eth0" Apr 16 02:34:31.218273 containerd[1580]: time="2026-04-16T02:34:31.217883259Z" level=info msg="connecting to shim b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2" address="unix:///run/containerd/s/c09e219027cf1c8335f9b6b26b9ecd22b51ab126ade53f879c847030176458a8" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:34:31.290552 systemd-networkd[1491]: cali295d7d863b9: Gained IPv6LL Apr 16 02:34:31.333909 systemd[1]: Started cri-containerd-b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2.scope - libcontainer container b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2. Apr 16 02:34:31.367960 systemd-resolved[1494]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 02:34:31.442128 containerd[1580]: time="2026-04-16T02:34:31.441809162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf9694cf9-kwmsb,Uid:62859b98-ece2-40d6-b938-ca7d0fe4bc04,Namespace:calico-system,Attempt:0,} returns sandbox id \"b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2\"" Apr 16 02:34:31.658621 systemd-networkd[1491]: cali2f117522cf6: Gained IPv6LL Apr 16 02:34:31.939432 containerd[1580]: time="2026-04-16T02:34:31.939155780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:31.941094 containerd[1580]: time="2026-04-16T02:34:31.940715494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 16 02:34:31.944783 containerd[1580]: 
time="2026-04-16T02:34:31.944526238Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:31.953733 containerd[1580]: time="2026-04-16T02:34:31.953662591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:31.954507 containerd[1580]: time="2026-04-16T02:34:31.954426508Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 4.528989729s" Apr 16 02:34:31.954507 containerd[1580]: time="2026-04-16T02:34:31.954516930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 16 02:34:31.957259 containerd[1580]: time="2026-04-16T02:34:31.956363546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 02:34:31.962126 containerd[1580]: time="2026-04-16T02:34:31.961919526Z" level=info msg="CreateContainer within sandbox \"ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 02:34:31.996024 containerd[1580]: time="2026-04-16T02:34:31.995754419Z" level=info msg="Container 3761d42f62688589ccf9ca9bda3d77ed7826b0723b9f1d124f4f9227ec626b0c: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:34:32.011997 containerd[1580]: time="2026-04-16T02:34:32.011563069Z" level=info msg="CreateContainer within sandbox 
\"ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3761d42f62688589ccf9ca9bda3d77ed7826b0723b9f1d124f4f9227ec626b0c\"" Apr 16 02:34:32.014179 containerd[1580]: time="2026-04-16T02:34:32.013759852Z" level=info msg="StartContainer for \"3761d42f62688589ccf9ca9bda3d77ed7826b0723b9f1d124f4f9227ec626b0c\"" Apr 16 02:34:32.018946 containerd[1580]: time="2026-04-16T02:34:32.018903003Z" level=info msg="connecting to shim 3761d42f62688589ccf9ca9bda3d77ed7826b0723b9f1d124f4f9227ec626b0c" address="unix:///run/containerd/s/a31268da652ab0630f4d26a03cdcd2f3cbc7556b0b3b5a66c8bd159da385ef10" protocol=ttrpc version=3 Apr 16 02:34:32.101483 systemd[1]: Started cri-containerd-3761d42f62688589ccf9ca9bda3d77ed7826b0723b9f1d124f4f9227ec626b0c.scope - libcontainer container 3761d42f62688589ccf9ca9bda3d77ed7826b0723b9f1d124f4f9227ec626b0c. Apr 16 02:34:32.295736 containerd[1580]: time="2026-04-16T02:34:32.295414605Z" level=info msg="StartContainer for \"3761d42f62688589ccf9ca9bda3d77ed7826b0723b9f1d124f4f9227ec626b0c\" returns successfully" Apr 16 02:34:32.402331 containerd[1580]: time="2026-04-16T02:34:32.402266013Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:32.403309 containerd[1580]: time="2026-04-16T02:34:32.403173444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 16 02:34:32.408586 containerd[1580]: time="2026-04-16T02:34:32.408143815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 451.742141ms" 
Apr 16 02:34:32.408586 containerd[1580]: time="2026-04-16T02:34:32.408544111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 16 02:34:32.412807 containerd[1580]: time="2026-04-16T02:34:32.412760494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 16 02:34:32.419991 containerd[1580]: time="2026-04-16T02:34:32.419903920Z" level=info msg="CreateContainer within sandbox \"a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 02:34:32.503815 containerd[1580]: time="2026-04-16T02:34:32.503724608Z" level=info msg="Container 8e1221bd905b97a38e498261d0ba351ea1fa8f05ea8bbb358fc7e14bc335e630: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:34:32.518588 containerd[1580]: time="2026-04-16T02:34:32.518514035Z" level=info msg="CreateContainer within sandbox \"a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8e1221bd905b97a38e498261d0ba351ea1fa8f05ea8bbb358fc7e14bc335e630\"" Apr 16 02:34:32.520096 containerd[1580]: time="2026-04-16T02:34:32.519977684Z" level=info msg="StartContainer for \"8e1221bd905b97a38e498261d0ba351ea1fa8f05ea8bbb358fc7e14bc335e630\"" Apr 16 02:34:32.522029 containerd[1580]: time="2026-04-16T02:34:32.521928755Z" level=info msg="connecting to shim 8e1221bd905b97a38e498261d0ba351ea1fa8f05ea8bbb358fc7e14bc335e630" address="unix:///run/containerd/s/5f0aba82a8a9744f4b387b0956d5896103121f3797b2474286b174381bf1cd31" protocol=ttrpc version=3 Apr 16 02:34:32.548776 systemd[1]: Started cri-containerd-8e1221bd905b97a38e498261d0ba351ea1fa8f05ea8bbb358fc7e14bc335e630.scope - libcontainer container 8e1221bd905b97a38e498261d0ba351ea1fa8f05ea8bbb358fc7e14bc335e630. 
Apr 16 02:34:32.640292 containerd[1580]: time="2026-04-16T02:34:32.640204664Z" level=info msg="StartContainer for \"8e1221bd905b97a38e498261d0ba351ea1fa8f05ea8bbb358fc7e14bc335e630\" returns successfully" Apr 16 02:34:32.648485 kubelet[2735]: I0416 02:34:32.646802 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jc7c4" podStartSLOduration=59.64678405 podStartE2EDuration="59.64678405s" podCreationTimestamp="2026-04-16 02:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:34:31.633263217 +0000 UTC m=+66.080294694" watchObservedRunningTime="2026-04-16 02:34:32.64678405 +0000 UTC m=+67.093815533" Apr 16 02:34:32.677020 kubelet[2735]: I0416 02:34:32.676914 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-74fc6d9f9b-pdkgn" podStartSLOduration=42.145451677 podStartE2EDuration="46.676885234s" podCreationTimestamp="2026-04-16 02:33:46 +0000 UTC" firstStartedPulling="2026-04-16 02:34:27.4247362 +0000 UTC m=+61.871767703" lastFinishedPulling="2026-04-16 02:34:31.956169788 +0000 UTC m=+66.403201260" observedRunningTime="2026-04-16 02:34:32.649808596 +0000 UTC m=+67.096840077" watchObservedRunningTime="2026-04-16 02:34:32.676885234 +0000 UTC m=+67.123916715" Apr 16 02:34:33.130477 systemd-networkd[1491]: calia722d393b5f: Gained IPv6LL Apr 16 02:34:33.643795 kubelet[2735]: I0416 02:34:33.643734 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 02:34:34.352300 containerd[1580]: time="2026-04-16T02:34:34.351712508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:34.353273 containerd[1580]: time="2026-04-16T02:34:34.352440101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes 
read=8792502" Apr 16 02:34:34.354983 containerd[1580]: time="2026-04-16T02:34:34.354892609Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:34.359077 containerd[1580]: time="2026-04-16T02:34:34.358987372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:34.363352 containerd[1580]: time="2026-04-16T02:34:34.362808676Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.950008601s" Apr 16 02:34:34.363352 containerd[1580]: time="2026-04-16T02:34:34.362884317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 16 02:34:34.366531 containerd[1580]: time="2026-04-16T02:34:34.366493228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 16 02:34:34.371196 containerd[1580]: time="2026-04-16T02:34:34.371100443Z" level=info msg="CreateContainer within sandbox \"a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 16 02:34:34.390258 containerd[1580]: time="2026-04-16T02:34:34.389668682Z" level=info msg="Container cc2738f0bc915b4358d426ec1853ae495c9877394330aee6ce644f9791381885: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:34:34.419280 containerd[1580]: time="2026-04-16T02:34:34.418697901Z" level=info msg="CreateContainer within sandbox 
\"a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cc2738f0bc915b4358d426ec1853ae495c9877394330aee6ce644f9791381885\"" Apr 16 02:34:34.424909 containerd[1580]: time="2026-04-16T02:34:34.424826066Z" level=info msg="StartContainer for \"cc2738f0bc915b4358d426ec1853ae495c9877394330aee6ce644f9791381885\"" Apr 16 02:34:34.441371 containerd[1580]: time="2026-04-16T02:34:34.440267930Z" level=info msg="connecting to shim cc2738f0bc915b4358d426ec1853ae495c9877394330aee6ce644f9791381885" address="unix:///run/containerd/s/f813b57324f6f9e0fd15c3fe5d74558e815c4c1ee895e57e5dd16dc051e26ad9" protocol=ttrpc version=3 Apr 16 02:34:34.537686 systemd[1]: Started cri-containerd-cc2738f0bc915b4358d426ec1853ae495c9877394330aee6ce644f9791381885.scope - libcontainer container cc2738f0bc915b4358d426ec1853ae495c9877394330aee6ce644f9791381885. Apr 16 02:34:34.650635 containerd[1580]: time="2026-04-16T02:34:34.650385207Z" level=info msg="StartContainer for \"cc2738f0bc915b4358d426ec1853ae495c9877394330aee6ce644f9791381885\" returns successfully" Apr 16 02:34:34.651982 kubelet[2735]: I0416 02:34:34.651903 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 02:34:36.743740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3263995207.mount: Deactivated successfully. 
Apr 16 02:34:37.205644 containerd[1580]: time="2026-04-16T02:34:37.205401564Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:37.206177 containerd[1580]: time="2026-04-16T02:34:37.206110289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 16 02:34:37.207910 containerd[1580]: time="2026-04-16T02:34:37.207854893Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:37.212638 containerd[1580]: time="2026-04-16T02:34:37.212545651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:37.213735 containerd[1580]: time="2026-04-16T02:34:37.213629213Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.847084498s" Apr 16 02:34:37.213735 containerd[1580]: time="2026-04-16T02:34:37.213689854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 16 02:34:37.215429 containerd[1580]: time="2026-04-16T02:34:37.215357011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 16 02:34:37.222272 containerd[1580]: time="2026-04-16T02:34:37.222199135Z" level=info msg="CreateContainer within sandbox 
\"e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 16 02:34:37.235029 containerd[1580]: time="2026-04-16T02:34:37.234648961Z" level=info msg="Container 331441659b7b40c116126267b4b6fb88a76f85ffa961b7a61a24e9317d6d0b2b: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:34:37.271577 containerd[1580]: time="2026-04-16T02:34:37.271486627Z" level=info msg="CreateContainer within sandbox \"e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"331441659b7b40c116126267b4b6fb88a76f85ffa961b7a61a24e9317d6d0b2b\"" Apr 16 02:34:37.272489 containerd[1580]: time="2026-04-16T02:34:37.272444411Z" level=info msg="StartContainer for \"331441659b7b40c116126267b4b6fb88a76f85ffa961b7a61a24e9317d6d0b2b\"" Apr 16 02:34:37.274256 containerd[1580]: time="2026-04-16T02:34:37.274058319Z" level=info msg="connecting to shim 331441659b7b40c116126267b4b6fb88a76f85ffa961b7a61a24e9317d6d0b2b" address="unix:///run/containerd/s/9ab5441f7401cc7fe92cbb19ec26537075c00ed9f7a94b759de337bbf2f417c6" protocol=ttrpc version=3 Apr 16 02:34:37.309989 systemd[1]: Started cri-containerd-331441659b7b40c116126267b4b6fb88a76f85ffa961b7a61a24e9317d6d0b2b.scope - libcontainer container 331441659b7b40c116126267b4b6fb88a76f85ffa961b7a61a24e9317d6d0b2b. 
Apr 16 02:34:37.458161 containerd[1580]: time="2026-04-16T02:34:37.457983870Z" level=info msg="StartContainer for \"331441659b7b40c116126267b4b6fb88a76f85ffa961b7a61a24e9317d6d0b2b\" returns successfully" Apr 16 02:34:37.809960 kubelet[2735]: I0416 02:34:37.806814 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-kbk5r" podStartSLOduration=44.619868364 podStartE2EDuration="51.806780667s" podCreationTimestamp="2026-04-16 02:33:46 +0000 UTC" firstStartedPulling="2026-04-16 02:34:30.028068506 +0000 UTC m=+64.475099981" lastFinishedPulling="2026-04-16 02:34:37.214980801 +0000 UTC m=+71.662012284" observedRunningTime="2026-04-16 02:34:37.805910717 +0000 UTC m=+72.252942201" watchObservedRunningTime="2026-04-16 02:34:37.806780667 +0000 UTC m=+72.253812154" Apr 16 02:34:37.809960 kubelet[2735]: I0416 02:34:37.807399 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-74fc6d9f9b-xt2mx" podStartSLOduration=47.008287523 podStartE2EDuration="51.807341967s" podCreationTimestamp="2026-04-16 02:33:46 +0000 UTC" firstStartedPulling="2026-04-16 02:34:27.612583177 +0000 UTC m=+62.059614650" lastFinishedPulling="2026-04-16 02:34:32.411637621 +0000 UTC m=+66.858669094" observedRunningTime="2026-04-16 02:34:33.680747578 +0000 UTC m=+68.127779054" watchObservedRunningTime="2026-04-16 02:34:37.807341967 +0000 UTC m=+72.254373458" Apr 16 02:34:39.719067 containerd[1580]: time="2026-04-16T02:34:39.718973159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:39.719865 containerd[1580]: time="2026-04-16T02:34:39.719806653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 16 02:34:39.721198 containerd[1580]: time="2026-04-16T02:34:39.721141275Z" level=info msg="ImageCreate event 
name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:39.726264 containerd[1580]: time="2026-04-16T02:34:39.725676047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:39.726659 containerd[1580]: time="2026-04-16T02:34:39.726608944Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.511156101s" Apr 16 02:34:39.726659 containerd[1580]: time="2026-04-16T02:34:39.726656700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 16 02:34:39.727425 containerd[1580]: time="2026-04-16T02:34:39.727394581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 16 02:34:39.813007 containerd[1580]: time="2026-04-16T02:34:39.812949479Z" level=info msg="CreateContainer within sandbox \"b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 16 02:34:39.825609 containerd[1580]: time="2026-04-16T02:34:39.825538364Z" level=info msg="Container 12a65f40b40c5315e55729edd4a9c5fe5af76259665f0ebe4a0a4283eb6dffd7: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:34:39.838122 containerd[1580]: time="2026-04-16T02:34:39.838001162Z" level=info msg="CreateContainer within sandbox 
\"b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"12a65f40b40c5315e55729edd4a9c5fe5af76259665f0ebe4a0a4283eb6dffd7\"" Apr 16 02:34:39.840852 containerd[1580]: time="2026-04-16T02:34:39.839385630Z" level=info msg="StartContainer for \"12a65f40b40c5315e55729edd4a9c5fe5af76259665f0ebe4a0a4283eb6dffd7\"" Apr 16 02:34:39.840852 containerd[1580]: time="2026-04-16T02:34:39.840709941Z" level=info msg="connecting to shim 12a65f40b40c5315e55729edd4a9c5fe5af76259665f0ebe4a0a4283eb6dffd7" address="unix:///run/containerd/s/c09e219027cf1c8335f9b6b26b9ecd22b51ab126ade53f879c847030176458a8" protocol=ttrpc version=3 Apr 16 02:34:39.870539 systemd[1]: Started cri-containerd-12a65f40b40c5315e55729edd4a9c5fe5af76259665f0ebe4a0a4283eb6dffd7.scope - libcontainer container 12a65f40b40c5315e55729edd4a9c5fe5af76259665f0ebe4a0a4283eb6dffd7. Apr 16 02:34:39.946278 containerd[1580]: time="2026-04-16T02:34:39.946206430Z" level=info msg="StartContainer for \"12a65f40b40c5315e55729edd4a9c5fe5af76259665f0ebe4a0a4283eb6dffd7\" returns successfully" Apr 16 02:34:40.852379 kubelet[2735]: I0416 02:34:40.852166 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7cf9694cf9-kwmsb" podStartSLOduration=44.573235487 podStartE2EDuration="52.852142578s" podCreationTimestamp="2026-04-16 02:33:48 +0000 UTC" firstStartedPulling="2026-04-16 02:34:31.448401361 +0000 UTC m=+65.895432835" lastFinishedPulling="2026-04-16 02:34:39.727308453 +0000 UTC m=+74.174339926" observedRunningTime="2026-04-16 02:34:40.7665159 +0000 UTC m=+75.213547390" watchObservedRunningTime="2026-04-16 02:34:40.852142578 +0000 UTC m=+75.299174071" Apr 16 02:34:42.124519 containerd[1580]: time="2026-04-16T02:34:42.124336260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 16 02:34:42.125899 containerd[1580]: time="2026-04-16T02:34:42.125820002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 16 02:34:42.127865 containerd[1580]: time="2026-04-16T02:34:42.127738679Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:42.131438 containerd[1580]: time="2026-04-16T02:34:42.131386646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:34:42.132324 containerd[1580]: time="2026-04-16T02:34:42.132269163Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.404835354s" Apr 16 02:34:42.132373 containerd[1580]: time="2026-04-16T02:34:42.132325687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 16 02:34:42.147971 containerd[1580]: time="2026-04-16T02:34:42.147421752Z" level=info msg="CreateContainer within sandbox \"a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 16 02:34:42.229823 containerd[1580]: time="2026-04-16T02:34:42.229720009Z" level=info msg="Container 3b42178cd83a3ce2c55de1753460f2d2ea9125e3bafc3be078cd86f160f6a42e: CDI devices from CRI 
Config.CDIDevices: []" Apr 16 02:34:42.248525 containerd[1580]: time="2026-04-16T02:34:42.248441925Z" level=info msg="CreateContainer within sandbox \"a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3b42178cd83a3ce2c55de1753460f2d2ea9125e3bafc3be078cd86f160f6a42e\"" Apr 16 02:34:42.252638 containerd[1580]: time="2026-04-16T02:34:42.252560313Z" level=info msg="StartContainer for \"3b42178cd83a3ce2c55de1753460f2d2ea9125e3bafc3be078cd86f160f6a42e\"" Apr 16 02:34:42.255391 containerd[1580]: time="2026-04-16T02:34:42.255313649Z" level=info msg="connecting to shim 3b42178cd83a3ce2c55de1753460f2d2ea9125e3bafc3be078cd86f160f6a42e" address="unix:///run/containerd/s/f813b57324f6f9e0fd15c3fe5d74558e815c4c1ee895e57e5dd16dc051e26ad9" protocol=ttrpc version=3 Apr 16 02:34:42.288915 systemd[1]: Started cri-containerd-3b42178cd83a3ce2c55de1753460f2d2ea9125e3bafc3be078cd86f160f6a42e.scope - libcontainer container 3b42178cd83a3ce2c55de1753460f2d2ea9125e3bafc3be078cd86f160f6a42e. 
Apr 16 02:34:42.391984 containerd[1580]: time="2026-04-16T02:34:42.390675251Z" level=info msg="StartContainer for \"3b42178cd83a3ce2c55de1753460f2d2ea9125e3bafc3be078cd86f160f6a42e\" returns successfully" Apr 16 02:34:42.829010 kubelet[2735]: I0416 02:34:42.827143 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2l7kg" podStartSLOduration=41.644700969 podStartE2EDuration="54.827106342s" podCreationTimestamp="2026-04-16 02:33:48 +0000 UTC" firstStartedPulling="2026-04-16 02:34:28.950875738 +0000 UTC m=+63.397907214" lastFinishedPulling="2026-04-16 02:34:42.133281114 +0000 UTC m=+76.580312587" observedRunningTime="2026-04-16 02:34:42.825007625 +0000 UTC m=+77.272039107" watchObservedRunningTime="2026-04-16 02:34:42.827106342 +0000 UTC m=+77.274137868" Apr 16 02:34:42.906855 kubelet[2735]: I0416 02:34:42.906711 2735 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 16 02:34:42.908886 kubelet[2735]: I0416 02:34:42.908492 2735 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 16 02:34:52.159177 kubelet[2735]: I0416 02:34:52.159107 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 02:35:06.319424 kubelet[2735]: I0416 02:35:06.317836 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 02:35:22.623107 systemd[1]: Started sshd@7-10.0.0.48:22-10.0.0.1:39596.service - OpenSSH per-connection server daemon (10.0.0.1:39596). 
Apr 16 02:35:22.803946 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 39596 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:35:22.812481 sshd-session[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:35:22.833339 systemd-logind[1562]: New session 8 of user core. Apr 16 02:35:22.839558 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 16 02:35:23.591520 sshd[5605]: Connection closed by 10.0.0.1 port 39596 Apr 16 02:35:23.594041 sshd-session[5602]: pam_unix(sshd:session): session closed for user core Apr 16 02:35:23.609985 systemd-logind[1562]: Session 8 logged out. Waiting for processes to exit. Apr 16 02:35:23.610158 systemd[1]: sshd@7-10.0.0.48:22-10.0.0.1:39596.service: Deactivated successfully. Apr 16 02:35:23.613809 systemd[1]: session-8.scope: Deactivated successfully. Apr 16 02:35:23.621595 systemd-logind[1562]: Removed session 8. Apr 16 02:35:28.623583 systemd[1]: Started sshd@8-10.0.0.48:22-10.0.0.1:55092.service - OpenSSH per-connection server daemon (10.0.0.1:55092). Apr 16 02:35:28.695754 sshd[5632]: Accepted publickey for core from 10.0.0.1 port 55092 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:35:28.697530 sshd-session[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:35:28.704501 systemd-logind[1562]: New session 9 of user core. Apr 16 02:35:28.718804 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 16 02:35:28.945649 sshd[5636]: Connection closed by 10.0.0.1 port 55092 Apr 16 02:35:28.947556 sshd-session[5632]: pam_unix(sshd:session): session closed for user core Apr 16 02:35:28.958319 systemd[1]: sshd@8-10.0.0.48:22-10.0.0.1:55092.service: Deactivated successfully. Apr 16 02:35:28.960856 systemd[1]: session-9.scope: Deactivated successfully. Apr 16 02:35:28.963127 systemd-logind[1562]: Session 9 logged out. Waiting for processes to exit. 
Apr 16 02:35:28.966757 systemd-logind[1562]: Removed session 9. Apr 16 02:35:33.957622 systemd[1]: Started sshd@9-10.0.0.48:22-10.0.0.1:55098.service - OpenSSH per-connection server daemon (10.0.0.1:55098). Apr 16 02:35:34.030761 sshd[5650]: Accepted publickey for core from 10.0.0.1 port 55098 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:35:34.032444 sshd-session[5650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:35:34.040175 systemd-logind[1562]: New session 10 of user core. Apr 16 02:35:34.051548 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 16 02:35:34.270174 sshd[5653]: Connection closed by 10.0.0.1 port 55098 Apr 16 02:35:34.271447 sshd-session[5650]: pam_unix(sshd:session): session closed for user core Apr 16 02:35:34.275618 systemd[1]: sshd@9-10.0.0.48:22-10.0.0.1:55098.service: Deactivated successfully. Apr 16 02:35:34.278387 systemd[1]: session-10.scope: Deactivated successfully. Apr 16 02:35:34.279739 systemd-logind[1562]: Session 10 logged out. Waiting for processes to exit. Apr 16 02:35:34.281972 systemd-logind[1562]: Removed session 10. Apr 16 02:35:39.313014 systemd[1]: Started sshd@10-10.0.0.48:22-10.0.0.1:59136.service - OpenSSH per-connection server daemon (10.0.0.1:59136). Apr 16 02:35:39.441796 sshd[5691]: Accepted publickey for core from 10.0.0.1 port 59136 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:35:39.446972 sshd-session[5691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:35:39.519785 systemd-logind[1562]: New session 11 of user core. Apr 16 02:35:39.531526 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 16 02:35:39.864559 sshd[5694]: Connection closed by 10.0.0.1 port 59136 Apr 16 02:35:39.865722 sshd-session[5691]: pam_unix(sshd:session): session closed for user core Apr 16 02:35:39.873202 systemd[1]: sshd@10-10.0.0.48:22-10.0.0.1:59136.service: Deactivated successfully. Apr 16 02:35:39.877016 systemd[1]: session-11.scope: Deactivated successfully. Apr 16 02:35:39.880111 systemd-logind[1562]: Session 11 logged out. Waiting for processes to exit. Apr 16 02:35:39.882995 systemd-logind[1562]: Removed session 11. Apr 16 02:35:44.924784 systemd[1]: Started sshd@11-10.0.0.48:22-10.0.0.1:59150.service - OpenSSH per-connection server daemon (10.0.0.1:59150). Apr 16 02:35:45.036434 sshd[5740]: Accepted publickey for core from 10.0.0.1 port 59150 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:35:45.040629 sshd-session[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:35:45.083059 systemd-logind[1562]: New session 12 of user core. Apr 16 02:35:45.093659 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 16 02:35:45.398339 sshd[5743]: Connection closed by 10.0.0.1 port 59150 Apr 16 02:35:45.398725 sshd-session[5740]: pam_unix(sshd:session): session closed for user core Apr 16 02:35:45.407471 systemd[1]: sshd@11-10.0.0.48:22-10.0.0.1:59150.service: Deactivated successfully. Apr 16 02:35:45.412933 systemd[1]: session-12.scope: Deactivated successfully. Apr 16 02:35:45.417611 systemd-logind[1562]: Session 12 logged out. Waiting for processes to exit. Apr 16 02:35:45.420592 systemd-logind[1562]: Removed session 12. Apr 16 02:35:50.422850 systemd[1]: Started sshd@12-10.0.0.48:22-10.0.0.1:41458.service - OpenSSH per-connection server daemon (10.0.0.1:41458). 
Apr 16 02:35:50.689877 sshd[5784]: Accepted publickey for core from 10.0.0.1 port 41458 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:35:50.692849 sshd-session[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:35:50.706432 systemd-logind[1562]: New session 13 of user core. Apr 16 02:35:50.714443 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 16 02:35:51.026046 sshd[5787]: Connection closed by 10.0.0.1 port 41458 Apr 16 02:35:51.028578 sshd-session[5784]: pam_unix(sshd:session): session closed for user core Apr 16 02:35:51.046882 systemd[1]: sshd@12-10.0.0.48:22-10.0.0.1:41458.service: Deactivated successfully. Apr 16 02:35:51.072029 systemd[1]: session-13.scope: Deactivated successfully. Apr 16 02:35:51.075955 systemd-logind[1562]: Session 13 logged out. Waiting for processes to exit. Apr 16 02:35:51.078471 systemd-logind[1562]: Removed session 13. Apr 16 02:35:56.065566 systemd[1]: Started sshd@13-10.0.0.48:22-10.0.0.1:51154.service - OpenSSH per-connection server daemon (10.0.0.1:51154). Apr 16 02:35:56.211724 sshd[5847]: Accepted publickey for core from 10.0.0.1 port 51154 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:35:56.215176 sshd-session[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:35:56.241189 systemd-logind[1562]: New session 14 of user core. Apr 16 02:35:56.258628 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 16 02:35:56.590686 sshd[5850]: Connection closed by 10.0.0.1 port 51154 Apr 16 02:35:56.591358 sshd-session[5847]: pam_unix(sshd:session): session closed for user core Apr 16 02:35:56.597825 systemd[1]: sshd@13-10.0.0.48:22-10.0.0.1:51154.service: Deactivated successfully. Apr 16 02:35:56.603394 systemd[1]: session-14.scope: Deactivated successfully. Apr 16 02:35:56.607339 systemd-logind[1562]: Session 14 logged out. Waiting for processes to exit. 
Apr 16 02:35:56.610454 systemd-logind[1562]: Removed session 14. Apr 16 02:36:01.613999 systemd[1]: Started sshd@14-10.0.0.48:22-10.0.0.1:51158.service - OpenSSH per-connection server daemon (10.0.0.1:51158). Apr 16 02:36:01.724306 sshd[5881]: Accepted publickey for core from 10.0.0.1 port 51158 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:36:01.727729 sshd-session[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:36:01.735887 systemd-logind[1562]: New session 15 of user core. Apr 16 02:36:01.747790 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 16 02:36:01.959206 sshd[5884]: Connection closed by 10.0.0.1 port 51158 Apr 16 02:36:01.959746 sshd-session[5881]: pam_unix(sshd:session): session closed for user core Apr 16 02:36:01.963678 systemd[1]: sshd@14-10.0.0.48:22-10.0.0.1:51158.service: Deactivated successfully. Apr 16 02:36:01.965843 systemd[1]: session-15.scope: Deactivated successfully. Apr 16 02:36:01.966492 systemd-logind[1562]: Session 15 logged out. Waiting for processes to exit. Apr 16 02:36:01.968074 systemd-logind[1562]: Removed session 15. Apr 16 02:36:07.005456 systemd[1]: Started sshd@15-10.0.0.48:22-10.0.0.1:50846.service - OpenSSH per-connection server daemon (10.0.0.1:50846). Apr 16 02:36:07.122570 sshd[5901]: Accepted publickey for core from 10.0.0.1 port 50846 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:36:07.125140 sshd-session[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:36:07.138647 systemd-logind[1562]: New session 16 of user core. Apr 16 02:36:07.148005 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 16 02:36:07.444791 sshd[5904]: Connection closed by 10.0.0.1 port 50846 Apr 16 02:36:07.445288 sshd-session[5901]: pam_unix(sshd:session): session closed for user core Apr 16 02:36:07.453137 systemd[1]: sshd@15-10.0.0.48:22-10.0.0.1:50846.service: Deactivated successfully. Apr 16 02:36:07.455210 systemd[1]: session-16.scope: Deactivated successfully. Apr 16 02:36:07.456606 systemd-logind[1562]: Session 16 logged out. Waiting for processes to exit. Apr 16 02:36:07.457976 systemd-logind[1562]: Removed session 16. Apr 16 02:36:12.531402 systemd[1]: Started sshd@16-10.0.0.48:22-10.0.0.1:50852.service - OpenSSH per-connection server daemon (10.0.0.1:50852). Apr 16 02:36:12.640070 sshd[5965]: Accepted publickey for core from 10.0.0.1 port 50852 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:36:12.643421 sshd-session[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:36:12.679905 systemd-logind[1562]: New session 17 of user core. Apr 16 02:36:12.695102 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 16 02:36:13.013841 sshd[5968]: Connection closed by 10.0.0.1 port 50852 Apr 16 02:36:13.014397 sshd-session[5965]: pam_unix(sshd:session): session closed for user core Apr 16 02:36:13.020853 systemd[1]: sshd@16-10.0.0.48:22-10.0.0.1:50852.service: Deactivated successfully. Apr 16 02:36:13.042171 systemd[1]: session-17.scope: Deactivated successfully. Apr 16 02:36:13.070335 systemd-logind[1562]: Session 17 logged out. Waiting for processes to exit. Apr 16 02:36:13.081810 systemd-logind[1562]: Removed session 17. Apr 16 02:36:18.042167 systemd[1]: Started sshd@17-10.0.0.48:22-10.0.0.1:39248.service - OpenSSH per-connection server daemon (10.0.0.1:39248). 
Apr 16 02:36:18.207954 sshd[6006]: Accepted publickey for core from 10.0.0.1 port 39248 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:36:18.208168 sshd-session[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:36:18.232877 systemd-logind[1562]: New session 18 of user core. Apr 16 02:36:18.243399 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 16 02:36:18.619890 sshd[6009]: Connection closed by 10.0.0.1 port 39248 Apr 16 02:36:18.622047 sshd-session[6006]: pam_unix(sshd:session): session closed for user core Apr 16 02:36:18.627547 systemd[1]: sshd@17-10.0.0.48:22-10.0.0.1:39248.service: Deactivated successfully. Apr 16 02:36:18.636488 systemd[1]: session-18.scope: Deactivated successfully. Apr 16 02:36:18.639932 systemd-logind[1562]: Session 18 logged out. Waiting for processes to exit. Apr 16 02:36:18.645120 systemd-logind[1562]: Removed session 18. Apr 16 02:36:23.714392 systemd[1]: Started sshd@18-10.0.0.48:22-10.0.0.1:39254.service - OpenSSH per-connection server daemon (10.0.0.1:39254). Apr 16 02:36:23.992915 sshd[6049]: Accepted publickey for core from 10.0.0.1 port 39254 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:36:23.996728 sshd-session[6049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:36:24.033166 systemd-logind[1562]: New session 19 of user core. Apr 16 02:36:24.041867 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 16 02:36:24.414463 sshd[6052]: Connection closed by 10.0.0.1 port 39254 Apr 16 02:36:24.415839 sshd-session[6049]: pam_unix(sshd:session): session closed for user core Apr 16 02:36:24.427694 systemd[1]: sshd@18-10.0.0.48:22-10.0.0.1:39254.service: Deactivated successfully. Apr 16 02:36:24.433980 systemd[1]: session-19.scope: Deactivated successfully. Apr 16 02:36:24.438687 systemd-logind[1562]: Session 19 logged out. Waiting for processes to exit. 
Apr 16 02:36:24.443423 systemd-logind[1562]: Removed session 19. Apr 16 02:36:29.447389 systemd[1]: Started sshd@19-10.0.0.48:22-10.0.0.1:38356.service - OpenSSH per-connection server daemon (10.0.0.1:38356). Apr 16 02:36:29.646860 sshd[6069]: Accepted publickey for core from 10.0.0.1 port 38356 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:36:29.664827 sshd-session[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:36:29.675847 systemd-logind[1562]: New session 20 of user core. Apr 16 02:36:29.682899 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 16 02:36:30.086880 sshd[6072]: Connection closed by 10.0.0.1 port 38356 Apr 16 02:36:30.087028 sshd-session[6069]: pam_unix(sshd:session): session closed for user core Apr 16 02:36:30.095115 systemd[1]: sshd@19-10.0.0.48:22-10.0.0.1:38356.service: Deactivated successfully. Apr 16 02:36:30.101078 systemd[1]: session-20.scope: Deactivated successfully. Apr 16 02:36:30.104290 systemd-logind[1562]: Session 20 logged out. Waiting for processes to exit. Apr 16 02:36:30.107921 systemd-logind[1562]: Removed session 20. Apr 16 02:36:35.110127 systemd[1]: Started sshd@20-10.0.0.48:22-10.0.0.1:38366.service - OpenSSH per-connection server daemon (10.0.0.1:38366). Apr 16 02:36:35.345330 sshd[6094]: Accepted publickey for core from 10.0.0.1 port 38366 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:36:35.348199 sshd-session[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:36:35.366019 systemd-logind[1562]: New session 21 of user core. Apr 16 02:36:35.379996 systemd[1]: Started session-21.scope - Session 21 of User core. 
Apr 16 02:36:35.690433 kubelet[2735]: E0416 02:36:35.648351 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:35.761546 sshd[6097]: Connection closed by 10.0.0.1 port 38366 Apr 16 02:36:35.762049 sshd-session[6094]: pam_unix(sshd:session): session closed for user core Apr 16 02:36:35.769465 systemd[1]: sshd@20-10.0.0.48:22-10.0.0.1:38366.service: Deactivated successfully. Apr 16 02:36:35.775149 systemd[1]: session-21.scope: Deactivated successfully. Apr 16 02:36:35.777520 systemd-logind[1562]: Session 21 logged out. Waiting for processes to exit. Apr 16 02:36:35.781354 systemd-logind[1562]: Removed session 21. Apr 16 02:36:40.648559 kubelet[2735]: E0416 02:36:40.647373 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:40.811197 systemd[1]: Started sshd@21-10.0.0.48:22-10.0.0.1:37460.service - OpenSSH per-connection server daemon (10.0.0.1:37460). Apr 16 02:36:40.931858 sshd[6151]: Accepted publickey for core from 10.0.0.1 port 37460 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:36:40.940676 sshd-session[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:36:41.022880 systemd-logind[1562]: New session 22 of user core. Apr 16 02:36:41.031281 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 16 02:36:41.317799 sshd[6159]: Connection closed by 10.0.0.1 port 37460 Apr 16 02:36:41.319296 sshd-session[6151]: pam_unix(sshd:session): session closed for user core Apr 16 02:36:41.333176 systemd[1]: sshd@21-10.0.0.48:22-10.0.0.1:37460.service: Deactivated successfully. Apr 16 02:36:41.337548 systemd[1]: session-22.scope: Deactivated successfully. Apr 16 02:36:41.341866 systemd-logind[1562]: Session 22 logged out. 
Waiting for processes to exit. Apr 16 02:36:41.346332 systemd-logind[1562]: Removed session 22. Apr 16 02:36:46.348042 systemd[1]: Started sshd@22-10.0.0.48:22-10.0.0.1:57330.service - OpenSSH per-connection server daemon (10.0.0.1:57330). Apr 16 02:36:46.510824 sshd[6174]: Accepted publickey for core from 10.0.0.1 port 57330 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:36:46.514089 sshd-session[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:36:46.527320 systemd-logind[1562]: New session 23 of user core. Apr 16 02:36:46.539952 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 16 02:36:46.676462 kubelet[2735]: E0416 02:36:46.676157 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:46.845188 sshd[6177]: Connection closed by 10.0.0.1 port 57330 Apr 16 02:36:46.848015 sshd-session[6174]: pam_unix(sshd:session): session closed for user core Apr 16 02:36:46.856389 systemd[1]: sshd@22-10.0.0.48:22-10.0.0.1:57330.service: Deactivated successfully. Apr 16 02:36:46.859926 systemd[1]: session-23.scope: Deactivated successfully. Apr 16 02:36:46.862691 systemd-logind[1562]: Session 23 logged out. Waiting for processes to exit. Apr 16 02:36:46.867568 systemd[1]: Started sshd@23-10.0.0.48:22-10.0.0.1:57346.service - OpenSSH per-connection server daemon (10.0.0.1:57346). Apr 16 02:36:46.869658 systemd-logind[1562]: Removed session 23. Apr 16 02:36:47.033030 sshd[6199]: Accepted publickey for core from 10.0.0.1 port 57346 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:36:47.034440 sshd-session[6199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:36:47.045002 systemd-logind[1562]: New session 24 of user core. 
Apr 16 02:36:47.053740 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 16 02:36:47.476712 sshd[6202]: Connection closed by 10.0.0.1 port 57346 Apr 16 02:36:47.477912 sshd-session[6199]: pam_unix(sshd:session): session closed for user core Apr 16 02:36:47.497121 systemd[1]: Started sshd@24-10.0.0.48:22-10.0.0.1:57350.service - OpenSSH per-connection server daemon (10.0.0.1:57350). Apr 16 02:36:47.498807 systemd[1]: sshd@23-10.0.0.48:22-10.0.0.1:57346.service: Deactivated successfully. Apr 16 02:36:47.503204 systemd[1]: session-24.scope: Deactivated successfully. Apr 16 02:36:47.526350 systemd-logind[1562]: Session 24 logged out. Waiting for processes to exit. Apr 16 02:36:47.529632 systemd-logind[1562]: Removed session 24. Apr 16 02:36:47.690285 sshd[6211]: Accepted publickey for core from 10.0.0.1 port 57350 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:36:47.693444 sshd-session[6211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:36:47.704659 systemd-logind[1562]: New session 25 of user core. Apr 16 02:36:47.710378 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 16 02:36:48.047931 sshd[6217]: Connection closed by 10.0.0.1 port 57350 Apr 16 02:36:48.048538 sshd-session[6211]: pam_unix(sshd:session): session closed for user core Apr 16 02:36:48.079057 systemd[1]: sshd@24-10.0.0.48:22-10.0.0.1:57350.service: Deactivated successfully. Apr 16 02:36:48.084387 systemd[1]: session-25.scope: Deactivated successfully. Apr 16 02:36:48.087246 systemd-logind[1562]: Session 25 logged out. Waiting for processes to exit. Apr 16 02:36:48.092660 systemd-logind[1562]: Removed session 25. Apr 16 02:36:53.112304 systemd[1]: Started sshd@25-10.0.0.48:22-10.0.0.1:57356.service - OpenSSH per-connection server daemon (10.0.0.1:57356). 
Apr 16 02:36:53.231274 sshd[6288]: Accepted publickey for core from 10.0.0.1 port 57356 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:36:53.233618 sshd-session[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:36:53.292520 systemd-logind[1562]: New session 26 of user core. Apr 16 02:36:53.302677 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 16 02:36:53.727283 sshd[6291]: Connection closed by 10.0.0.1 port 57356 Apr 16 02:36:53.726868 sshd-session[6288]: pam_unix(sshd:session): session closed for user core Apr 16 02:36:53.740755 systemd[1]: sshd@25-10.0.0.48:22-10.0.0.1:57356.service: Deactivated successfully. Apr 16 02:36:53.756729 systemd[1]: session-26.scope: Deactivated successfully. Apr 16 02:36:53.760105 systemd-logind[1562]: Session 26 logged out. Waiting for processes to exit. Apr 16 02:36:53.764062 systemd-logind[1562]: Removed session 26. Apr 16 02:36:58.789903 systemd[1]: Started sshd@26-10.0.0.48:22-10.0.0.1:46630.service - OpenSSH per-connection server daemon (10.0.0.1:46630). Apr 16 02:36:58.925803 sshd[6306]: Accepted publickey for core from 10.0.0.1 port 46630 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:36:58.934004 sshd-session[6306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:36:58.998344 systemd-logind[1562]: New session 27 of user core. Apr 16 02:36:59.030384 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 16 02:36:59.534646 sshd[6309]: Connection closed by 10.0.0.1 port 46630 Apr 16 02:36:59.534891 sshd-session[6306]: pam_unix(sshd:session): session closed for user core Apr 16 02:36:59.548118 systemd[1]: sshd@26-10.0.0.48:22-10.0.0.1:46630.service: Deactivated successfully. Apr 16 02:36:59.585956 systemd[1]: session-27.scope: Deactivated successfully. Apr 16 02:36:59.600699 systemd-logind[1562]: Session 27 logged out. Waiting for processes to exit. 
Apr 16 02:36:59.609523 systemd-logind[1562]: Removed session 27. Apr 16 02:37:04.599575 systemd[1]: Started sshd@27-10.0.0.48:22-10.0.0.1:46634.service - OpenSSH per-connection server daemon (10.0.0.1:46634). Apr 16 02:37:04.872856 sshd[6334]: Accepted publickey for core from 10.0.0.1 port 46634 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:37:04.884527 sshd-session[6334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:37:04.944979 systemd-logind[1562]: New session 28 of user core. Apr 16 02:37:05.017193 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 16 02:37:05.462830 sshd[6337]: Connection closed by 10.0.0.1 port 46634 Apr 16 02:37:05.464748 sshd-session[6334]: pam_unix(sshd:session): session closed for user core Apr 16 02:37:05.479623 systemd[1]: sshd@27-10.0.0.48:22-10.0.0.1:46634.service: Deactivated successfully. Apr 16 02:37:05.483913 systemd[1]: session-28.scope: Deactivated successfully. Apr 16 02:37:05.488477 systemd-logind[1562]: Session 28 logged out. Waiting for processes to exit. Apr 16 02:37:05.491482 systemd-logind[1562]: Removed session 28. Apr 16 02:37:10.491318 systemd[1]: Started sshd@28-10.0.0.48:22-10.0.0.1:47184.service - OpenSSH per-connection server daemon (10.0.0.1:47184). Apr 16 02:37:10.759464 sshd[6374]: Accepted publickey for core from 10.0.0.1 port 47184 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:37:10.762558 sshd-session[6374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:37:10.781827 systemd-logind[1562]: New session 29 of user core. Apr 16 02:37:10.795957 systemd[1]: Started session-29.scope - Session 29 of User core. 
Apr 16 02:37:11.227209 sshd[6389]: Connection closed by 10.0.0.1 port 47184 Apr 16 02:37:11.229539 sshd-session[6374]: pam_unix(sshd:session): session closed for user core Apr 16 02:37:11.250515 systemd[1]: sshd@28-10.0.0.48:22-10.0.0.1:47184.service: Deactivated successfully. Apr 16 02:37:11.260252 systemd[1]: session-29.scope: Deactivated successfully. Apr 16 02:37:11.263430 systemd-logind[1562]: Session 29 logged out. Waiting for processes to exit. Apr 16 02:37:11.269639 systemd-logind[1562]: Removed session 29. Apr 16 02:37:14.673607 kubelet[2735]: E0416 02:37:14.673351 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:37:16.257184 systemd[1]: Started sshd@29-10.0.0.48:22-10.0.0.1:40762.service - OpenSSH per-connection server daemon (10.0.0.1:40762). Apr 16 02:37:16.455832 sshd[6417]: Accepted publickey for core from 10.0.0.1 port 40762 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:37:16.464844 sshd-session[6417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:37:16.519148 systemd-logind[1562]: New session 30 of user core. Apr 16 02:37:16.524885 systemd[1]: Started session-30.scope - Session 30 of User core. Apr 16 02:37:17.070201 sshd[6420]: Connection closed by 10.0.0.1 port 40762 Apr 16 02:37:17.072722 sshd-session[6417]: pam_unix(sshd:session): session closed for user core Apr 16 02:37:17.090903 systemd[1]: sshd@29-10.0.0.48:22-10.0.0.1:40762.service: Deactivated successfully. Apr 16 02:37:17.096107 systemd[1]: session-30.scope: Deactivated successfully. Apr 16 02:37:17.103957 systemd-logind[1562]: Session 30 logged out. Waiting for processes to exit. Apr 16 02:37:17.109160 systemd-logind[1562]: Removed session 30. 
Apr 16 02:37:21.693703 kubelet[2735]: E0416 02:37:21.693628 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:37:22.132550 systemd[1]: Started sshd@30-10.0.0.48:22-10.0.0.1:40766.service - OpenSSH per-connection server daemon (10.0.0.1:40766). Apr 16 02:37:22.491841 sshd[6486]: Accepted publickey for core from 10.0.0.1 port 40766 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:37:22.494338 sshd-session[6486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:37:22.514943 systemd-logind[1562]: New session 31 of user core. Apr 16 02:37:22.527092 systemd[1]: Started session-31.scope - Session 31 of User core. Apr 16 02:37:23.108793 sshd[6490]: Connection closed by 10.0.0.1 port 40766 Apr 16 02:37:23.109909 sshd-session[6486]: pam_unix(sshd:session): session closed for user core Apr 16 02:37:23.124250 systemd[1]: sshd@30-10.0.0.48:22-10.0.0.1:40766.service: Deactivated successfully. Apr 16 02:37:23.133035 systemd[1]: session-31.scope: Deactivated successfully. Apr 16 02:37:23.137837 systemd-logind[1562]: Session 31 logged out. Waiting for processes to exit. Apr 16 02:37:23.144896 systemd-logind[1562]: Removed session 31. Apr 16 02:37:23.682501 kubelet[2735]: E0416 02:37:23.682422 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:37:28.204180 systemd[1]: Started sshd@31-10.0.0.48:22-10.0.0.1:43668.service - OpenSSH per-connection server daemon (10.0.0.1:43668). 
Apr 16 02:37:28.397763 sshd[6510]: Accepted publickey for core from 10.0.0.1 port 43668 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:37:28.401493 sshd-session[6510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:37:28.421784 systemd-logind[1562]: New session 32 of user core. Apr 16 02:37:28.431047 systemd[1]: Started session-32.scope - Session 32 of User core. Apr 16 02:37:28.881373 sshd[6513]: Connection closed by 10.0.0.1 port 43668 Apr 16 02:37:28.882509 sshd-session[6510]: pam_unix(sshd:session): session closed for user core Apr 16 02:37:28.891109 systemd[1]: sshd@31-10.0.0.48:22-10.0.0.1:43668.service: Deactivated successfully. Apr 16 02:37:28.895804 systemd[1]: session-32.scope: Deactivated successfully. Apr 16 02:37:28.898895 systemd-logind[1562]: Session 32 logged out. Waiting for processes to exit. Apr 16 02:37:28.901448 systemd-logind[1562]: Removed session 32. Apr 16 02:37:33.906029 systemd[1]: Started sshd@32-10.0.0.48:22-10.0.0.1:43674.service - OpenSSH per-connection server daemon (10.0.0.1:43674). Apr 16 02:37:34.065543 sshd[6555]: Accepted publickey for core from 10.0.0.1 port 43674 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:37:34.068647 sshd-session[6555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:37:34.086953 systemd-logind[1562]: New session 33 of user core. Apr 16 02:37:34.101189 systemd[1]: Started session-33.scope - Session 33 of User core. Apr 16 02:37:34.491737 sshd[6558]: Connection closed by 10.0.0.1 port 43674 Apr 16 02:37:34.492374 sshd-session[6555]: pam_unix(sshd:session): session closed for user core Apr 16 02:37:34.500973 systemd[1]: sshd@32-10.0.0.48:22-10.0.0.1:43674.service: Deactivated successfully. Apr 16 02:37:34.506156 systemd[1]: session-33.scope: Deactivated successfully. Apr 16 02:37:34.510524 systemd-logind[1562]: Session 33 logged out. Waiting for processes to exit. 
Apr 16 02:37:34.514772 systemd-logind[1562]: Removed session 33. Apr 16 02:37:36.678471 kubelet[2735]: E0416 02:37:36.677896 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:37:39.512849 systemd[1]: Started sshd@33-10.0.0.48:22-10.0.0.1:57996.service - OpenSSH per-connection server daemon (10.0.0.1:57996). Apr 16 02:37:39.648095 kubelet[2735]: E0416 02:37:39.648013 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:37:39.676502 sshd[6597]: Accepted publickey for core from 10.0.0.1 port 57996 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:37:39.679305 sshd-session[6597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:37:39.691914 systemd-logind[1562]: New session 34 of user core. Apr 16 02:37:39.702062 systemd[1]: Started session-34.scope - Session 34 of User core. Apr 16 02:37:40.120913 sshd[6600]: Connection closed by 10.0.0.1 port 57996 Apr 16 02:37:40.121972 sshd-session[6597]: pam_unix(sshd:session): session closed for user core Apr 16 02:37:40.133087 systemd[1]: sshd@33-10.0.0.48:22-10.0.0.1:57996.service: Deactivated successfully. Apr 16 02:37:40.138821 systemd[1]: session-34.scope: Deactivated successfully. Apr 16 02:37:40.143447 systemd-logind[1562]: Session 34 logged out. Waiting for processes to exit. Apr 16 02:37:40.150510 systemd-logind[1562]: Removed session 34. Apr 16 02:37:45.211979 systemd[1]: Started sshd@34-10.0.0.48:22-10.0.0.1:57998.service - OpenSSH per-connection server daemon (10.0.0.1:57998). 
Apr 16 02:37:45.330913 sshd[6636]: Accepted publickey for core from 10.0.0.1 port 57998 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:37:45.336079 sshd-session[6636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:37:45.412403 systemd-logind[1562]: New session 35 of user core. Apr 16 02:37:45.424310 systemd[1]: Started session-35.scope - Session 35 of User core. Apr 16 02:37:45.772090 sshd[6639]: Connection closed by 10.0.0.1 port 57998 Apr 16 02:37:45.774790 sshd-session[6636]: pam_unix(sshd:session): session closed for user core Apr 16 02:37:45.815989 systemd[1]: sshd@34-10.0.0.48:22-10.0.0.1:57998.service: Deactivated successfully. Apr 16 02:37:45.829142 systemd[1]: session-35.scope: Deactivated successfully. Apr 16 02:37:45.832066 systemd-logind[1562]: Session 35 logged out. Waiting for processes to exit. Apr 16 02:37:45.835961 systemd-logind[1562]: Removed session 35. Apr 16 02:37:50.807340 systemd[1]: Started sshd@35-10.0.0.48:22-10.0.0.1:51718.service - OpenSSH per-connection server daemon (10.0.0.1:51718). Apr 16 02:37:50.988680 sshd[6678]: Accepted publickey for core from 10.0.0.1 port 51718 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:37:50.991404 sshd-session[6678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:37:51.006661 systemd-logind[1562]: New session 36 of user core. Apr 16 02:37:51.019840 systemd[1]: Started session-36.scope - Session 36 of User core. Apr 16 02:37:51.385671 sshd[6681]: Connection closed by 10.0.0.1 port 51718 Apr 16 02:37:51.387095 sshd-session[6678]: pam_unix(sshd:session): session closed for user core Apr 16 02:37:51.396064 systemd[1]: sshd@35-10.0.0.48:22-10.0.0.1:51718.service: Deactivated successfully. Apr 16 02:37:51.400044 systemd[1]: session-36.scope: Deactivated successfully. Apr 16 02:37:51.403192 systemd-logind[1562]: Session 36 logged out. Waiting for processes to exit. 
Apr 16 02:37:51.410655 systemd-logind[1562]: Removed session 36. Apr 16 02:37:56.408691 systemd[1]: Started sshd@36-10.0.0.48:22-10.0.0.1:50128.service - OpenSSH per-connection server daemon (10.0.0.1:50128). Apr 16 02:37:56.597944 sshd[6716]: Accepted publickey for core from 10.0.0.1 port 50128 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:37:56.603041 sshd-session[6716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:37:56.636659 systemd-logind[1562]: New session 37 of user core. Apr 16 02:37:56.672567 systemd[1]: Started session-37.scope - Session 37 of User core. Apr 16 02:37:57.026309 sshd[6719]: Connection closed by 10.0.0.1 port 50128 Apr 16 02:37:57.027543 sshd-session[6716]: pam_unix(sshd:session): session closed for user core Apr 16 02:37:57.045965 systemd[1]: sshd@36-10.0.0.48:22-10.0.0.1:50128.service: Deactivated successfully. Apr 16 02:37:57.075986 systemd[1]: session-37.scope: Deactivated successfully. Apr 16 02:37:57.078748 systemd-logind[1562]: Session 37 logged out. Waiting for processes to exit. Apr 16 02:37:57.082474 systemd-logind[1562]: Removed session 37. Apr 16 02:38:02.080266 systemd[1]: Started sshd@37-10.0.0.48:22-10.0.0.1:50134.service - OpenSSH per-connection server daemon (10.0.0.1:50134). Apr 16 02:38:02.234395 sshd[6733]: Accepted publickey for core from 10.0.0.1 port 50134 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:38:02.239050 sshd-session[6733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:38:02.252385 systemd-logind[1562]: New session 38 of user core. Apr 16 02:38:02.266704 systemd[1]: Started session-38.scope - Session 38 of User core. 
Apr 16 02:38:02.592515 sshd[6736]: Connection closed by 10.0.0.1 port 50134 Apr 16 02:38:02.594147 sshd-session[6733]: pam_unix(sshd:session): session closed for user core Apr 16 02:38:02.605987 systemd[1]: sshd@37-10.0.0.48:22-10.0.0.1:50134.service: Deactivated successfully. Apr 16 02:38:02.609962 systemd[1]: session-38.scope: Deactivated successfully. Apr 16 02:38:02.614739 systemd-logind[1562]: Session 38 logged out. Waiting for processes to exit. Apr 16 02:38:02.617962 systemd-logind[1562]: Removed session 38. Apr 16 02:38:06.705631 kubelet[2735]: E0416 02:38:06.701883 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:38:07.634987 systemd[1]: Started sshd@38-10.0.0.48:22-10.0.0.1:37408.service - OpenSSH per-connection server daemon (10.0.0.1:37408). Apr 16 02:38:07.815495 sshd[6752]: Accepted publickey for core from 10.0.0.1 port 37408 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:38:07.821824 sshd-session[6752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:38:07.836270 systemd-logind[1562]: New session 39 of user core. Apr 16 02:38:07.844765 systemd[1]: Started session-39.scope - Session 39 of User core. Apr 16 02:38:08.242710 sshd[6755]: Connection closed by 10.0.0.1 port 37408 Apr 16 02:38:08.243431 sshd-session[6752]: pam_unix(sshd:session): session closed for user core Apr 16 02:38:08.302657 systemd[1]: sshd@38-10.0.0.48:22-10.0.0.1:37408.service: Deactivated successfully. Apr 16 02:38:08.311896 systemd[1]: session-39.scope: Deactivated successfully. Apr 16 02:38:08.314456 systemd-logind[1562]: Session 39 logged out. Waiting for processes to exit. Apr 16 02:38:08.320144 systemd[1]: Started sshd@39-10.0.0.48:22-10.0.0.1:37418.service - OpenSSH per-connection server daemon (10.0.0.1:37418). Apr 16 02:38:08.322385 systemd-logind[1562]: Removed session 39. 
Apr 16 02:38:08.548730 sshd[6768]: Accepted publickey for core from 10.0.0.1 port 37418 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:38:08.553792 sshd-session[6768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:38:08.576257 systemd-logind[1562]: New session 40 of user core. Apr 16 02:38:08.586838 systemd[1]: Started session-40.scope - Session 40 of User core. Apr 16 02:38:09.647272 kubelet[2735]: E0416 02:38:09.647158 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:38:09.676114 sshd[6771]: Connection closed by 10.0.0.1 port 37418 Apr 16 02:38:09.681255 sshd-session[6768]: pam_unix(sshd:session): session closed for user core Apr 16 02:38:09.695934 systemd[1]: sshd@39-10.0.0.48:22-10.0.0.1:37418.service: Deactivated successfully. Apr 16 02:38:09.700863 systemd[1]: session-40.scope: Deactivated successfully. Apr 16 02:38:09.704019 systemd-logind[1562]: Session 40 logged out. Waiting for processes to exit. Apr 16 02:38:09.708150 systemd[1]: Started sshd@40-10.0.0.48:22-10.0.0.1:37422.service - OpenSSH per-connection server daemon (10.0.0.1:37422). Apr 16 02:38:09.713037 systemd-logind[1562]: Removed session 40. Apr 16 02:38:09.942967 sshd[6806]: Accepted publickey for core from 10.0.0.1 port 37422 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:38:10.006396 sshd-session[6806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:38:10.031193 systemd-logind[1562]: New session 41 of user core. Apr 16 02:38:10.043316 systemd[1]: Started session-41.scope - Session 41 of User core. 
Apr 16 02:38:12.198670 sshd[6809]: Connection closed by 10.0.0.1 port 37422 Apr 16 02:38:12.196867 sshd-session[6806]: pam_unix(sshd:session): session closed for user core Apr 16 02:38:12.221524 systemd[1]: sshd@40-10.0.0.48:22-10.0.0.1:37422.service: Deactivated successfully. Apr 16 02:38:12.228664 systemd[1]: session-41.scope: Deactivated successfully. Apr 16 02:38:12.229191 systemd[1]: session-41.scope: Consumed 1.217s CPU time, 41.2M memory peak. Apr 16 02:38:12.235977 systemd-logind[1562]: Session 41 logged out. Waiting for processes to exit. Apr 16 02:38:12.246352 systemd[1]: Started sshd@41-10.0.0.48:22-10.0.0.1:37438.service - OpenSSH per-connection server daemon (10.0.0.1:37438). Apr 16 02:38:12.247361 systemd-logind[1562]: Removed session 41. Apr 16 02:38:12.426857 sshd[6856]: Accepted publickey for core from 10.0.0.1 port 37438 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:38:12.431078 sshd-session[6856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:38:12.484893 systemd-logind[1562]: New session 42 of user core. Apr 16 02:38:12.512539 systemd[1]: Started session-42.scope - Session 42 of User core. Apr 16 02:38:14.272770 sshd[6859]: Connection closed by 10.0.0.1 port 37438 Apr 16 02:38:14.276769 sshd-session[6856]: pam_unix(sshd:session): session closed for user core Apr 16 02:38:14.295914 systemd[1]: sshd@41-10.0.0.48:22-10.0.0.1:37438.service: Deactivated successfully. Apr 16 02:38:14.301541 systemd[1]: session-42.scope: Deactivated successfully. Apr 16 02:38:14.302525 systemd[1]: session-42.scope: Consumed 1.058s CPU time, 33.6M memory peak. Apr 16 02:38:14.307941 systemd-logind[1562]: Session 42 logged out. Waiting for processes to exit. Apr 16 02:38:14.316211 systemd[1]: Started sshd@42-10.0.0.48:22-10.0.0.1:37448.service - OpenSSH per-connection server daemon (10.0.0.1:37448). Apr 16 02:38:14.320150 systemd-logind[1562]: Removed session 42. 
Apr 16 02:38:14.504047 sshd[6873]: Accepted publickey for core from 10.0.0.1 port 37448 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:38:14.512701 sshd-session[6873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:38:14.599158 systemd-logind[1562]: New session 43 of user core. Apr 16 02:38:14.610394 systemd[1]: Started session-43.scope - Session 43 of User core. Apr 16 02:38:15.078154 sshd[6876]: Connection closed by 10.0.0.1 port 37448 Apr 16 02:38:15.078959 sshd-session[6873]: pam_unix(sshd:session): session closed for user core Apr 16 02:38:15.089933 systemd[1]: sshd@42-10.0.0.48:22-10.0.0.1:37448.service: Deactivated successfully. Apr 16 02:38:15.098250 systemd[1]: session-43.scope: Deactivated successfully. Apr 16 02:38:15.100988 systemd-logind[1562]: Session 43 logged out. Waiting for processes to exit. Apr 16 02:38:15.107362 systemd-logind[1562]: Removed session 43. Apr 16 02:38:20.100893 systemd[1]: Started sshd@43-10.0.0.48:22-10.0.0.1:58560.service - OpenSSH per-connection server daemon (10.0.0.1:58560). Apr 16 02:38:20.324374 sshd[6942]: Accepted publickey for core from 10.0.0.1 port 58560 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:38:20.329019 sshd-session[6942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:38:20.346410 systemd-logind[1562]: New session 44 of user core. Apr 16 02:38:20.371882 systemd[1]: Started session-44.scope - Session 44 of User core. Apr 16 02:38:20.835092 sshd[6945]: Connection closed by 10.0.0.1 port 58560 Apr 16 02:38:20.836336 sshd-session[6942]: pam_unix(sshd:session): session closed for user core Apr 16 02:38:20.847334 systemd[1]: sshd@43-10.0.0.48:22-10.0.0.1:58560.service: Deactivated successfully. Apr 16 02:38:20.854911 systemd[1]: session-44.scope: Deactivated successfully. Apr 16 02:38:20.874663 systemd-logind[1562]: Session 44 logged out. Waiting for processes to exit. 
Apr 16 02:38:20.886939 systemd-logind[1562]: Removed session 44. Apr 16 02:38:21.554524 containerd[1580]: time="2026-04-16T02:38:21.540799624Z" level=warning msg="container event discarded" container=7cb12f3068ba95d05f81127d1faf71527757adedc05ebde276c74dae75e74022 type=CONTAINER_CREATED_EVENT Apr 16 02:38:21.554524 containerd[1580]: time="2026-04-16T02:38:21.554512939Z" level=warning msg="container event discarded" container=7cb12f3068ba95d05f81127d1faf71527757adedc05ebde276c74dae75e74022 type=CONTAINER_STARTED_EVENT Apr 16 02:38:21.638193 containerd[1580]: time="2026-04-16T02:38:21.637396958Z" level=warning msg="container event discarded" container=14bc628c6ad9aede0f8bde3cbab1d8485d69499553f5bcef5540a899140a2dc1 type=CONTAINER_CREATED_EVENT Apr 16 02:38:21.638193 containerd[1580]: time="2026-04-16T02:38:21.637696589Z" level=warning msg="container event discarded" container=14bc628c6ad9aede0f8bde3cbab1d8485d69499553f5bcef5540a899140a2dc1 type=CONTAINER_STARTED_EVENT Apr 16 02:38:21.638193 containerd[1580]: time="2026-04-16T02:38:21.637748523Z" level=warning msg="container event discarded" container=b0272492d42a4de0ccc13c4e98fe2ea551967cc3afa89d31d6d06f0ae92dab53 type=CONTAINER_CREATED_EVENT Apr 16 02:38:21.638193 containerd[1580]: time="2026-04-16T02:38:21.637807263Z" level=warning msg="container event discarded" container=b0272492d42a4de0ccc13c4e98fe2ea551967cc3afa89d31d6d06f0ae92dab53 type=CONTAINER_STARTED_EVENT Apr 16 02:38:21.638193 containerd[1580]: time="2026-04-16T02:38:21.637841685Z" level=warning msg="container event discarded" container=bf3531d91edcc0773eddacb9861012bb63475583ca5a638e3a3e1743fe288175 type=CONTAINER_CREATED_EVENT Apr 16 02:38:21.638193 containerd[1580]: time="2026-04-16T02:38:21.637855459Z" level=warning msg="container event discarded" container=2a8dc7ecde3546ad821ccff6e1471b31085557a46fb515e53b29b1a3d6f1ef83 type=CONTAINER_CREATED_EVENT Apr 16 02:38:21.638193 containerd[1580]: time="2026-04-16T02:38:21.637865967Z" level=warning 
msg="container event discarded" container=54fb6fa2f74a0b02b5ea8d668f97048cff22faae48eeee483ee2c29073640557 type=CONTAINER_CREATED_EVENT Apr 16 02:38:21.638193 containerd[1580]: time="2026-04-16T02:38:21.637879551Z" level=warning msg="container event discarded" container=bf3531d91edcc0773eddacb9861012bb63475583ca5a638e3a3e1743fe288175 type=CONTAINER_STARTED_EVENT Apr 16 02:38:21.638193 containerd[1580]: time="2026-04-16T02:38:21.637886769Z" level=warning msg="container event discarded" container=54fb6fa2f74a0b02b5ea8d668f97048cff22faae48eeee483ee2c29073640557 type=CONTAINER_STARTED_EVENT Apr 16 02:38:21.638193 containerd[1580]: time="2026-04-16T02:38:21.637893149Z" level=warning msg="container event discarded" container=2a8dc7ecde3546ad821ccff6e1471b31085557a46fb515e53b29b1a3d6f1ef83 type=CONTAINER_STARTED_EVENT Apr 16 02:38:22.704911 kubelet[2735]: E0416 02:38:22.704053 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:38:25.903370 systemd[1]: Started sshd@44-10.0.0.48:22-10.0.0.1:50848.service - OpenSSH per-connection server daemon (10.0.0.1:50848). Apr 16 02:38:26.030174 sshd[6961]: Accepted publickey for core from 10.0.0.1 port 50848 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:38:26.032424 sshd-session[6961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:38:26.048722 systemd-logind[1562]: New session 45 of user core. Apr 16 02:38:26.063345 systemd[1]: Started session-45.scope - Session 45 of User core. Apr 16 02:38:26.446122 sshd[6964]: Connection closed by 10.0.0.1 port 50848 Apr 16 02:38:26.446827 sshd-session[6961]: pam_unix(sshd:session): session closed for user core Apr 16 02:38:26.529547 systemd[1]: sshd@44-10.0.0.48:22-10.0.0.1:50848.service: Deactivated successfully. Apr 16 02:38:26.533949 systemd[1]: session-45.scope: Deactivated successfully. 
Apr 16 02:38:26.539890 systemd-logind[1562]: Session 45 logged out. Waiting for processes to exit. Apr 16 02:38:26.551288 systemd-logind[1562]: Removed session 45. Apr 16 02:38:27.660153 kubelet[2735]: E0416 02:38:27.657779 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:38:31.562512 systemd[1]: Started sshd@45-10.0.0.48:22-10.0.0.1:50858.service - OpenSSH per-connection server daemon (10.0.0.1:50858). Apr 16 02:38:31.683036 sshd[6978]: Accepted publickey for core from 10.0.0.1 port 50858 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:38:31.687591 sshd-session[6978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:38:31.734126 systemd-logind[1562]: New session 46 of user core. Apr 16 02:38:31.744949 systemd[1]: Started session-46.scope - Session 46 of User core. Apr 16 02:38:32.225306 sshd[6981]: Connection closed by 10.0.0.1 port 50858 Apr 16 02:38:32.228043 sshd-session[6978]: pam_unix(sshd:session): session closed for user core Apr 16 02:38:32.235427 systemd[1]: sshd@45-10.0.0.48:22-10.0.0.1:50858.service: Deactivated successfully. Apr 16 02:38:32.247771 systemd[1]: session-46.scope: Deactivated successfully. Apr 16 02:38:32.251524 systemd-logind[1562]: Session 46 logged out. Waiting for processes to exit. Apr 16 02:38:32.270366 systemd-logind[1562]: Removed session 46. 
Apr 16 02:38:33.708205 containerd[1580]: time="2026-04-16T02:38:33.707752305Z" level=warning msg="container event discarded" container=482be240d2962b2f0476edb8ac98312b8d3195c93ee9bd489bf3e0429053ce21 type=CONTAINER_CREATED_EVENT Apr 16 02:38:33.709658 containerd[1580]: time="2026-04-16T02:38:33.709377422Z" level=warning msg="container event discarded" container=482be240d2962b2f0476edb8ac98312b8d3195c93ee9bd489bf3e0429053ce21 type=CONTAINER_STARTED_EVENT Apr 16 02:38:33.774993 containerd[1580]: time="2026-04-16T02:38:33.774849187Z" level=warning msg="container event discarded" container=32e3fdb98bacc1916bd06adf4a93f24c113a6cc080fec552a7b6e2668098ad4b type=CONTAINER_CREATED_EVENT Apr 16 02:38:33.943416 containerd[1580]: time="2026-04-16T02:38:33.943277219Z" level=warning msg="container event discarded" container=32e3fdb98bacc1916bd06adf4a93f24c113a6cc080fec552a7b6e2668098ad4b type=CONTAINER_STARTED_EVENT Apr 16 02:38:34.240630 containerd[1580]: time="2026-04-16T02:38:34.239865354Z" level=warning msg="container event discarded" container=66b03c1daee5c81ff15a78580da09b1e2dce31f49eb06308f239f8314a4d6fc3 type=CONTAINER_CREATED_EVENT Apr 16 02:38:34.240630 containerd[1580]: time="2026-04-16T02:38:34.240466800Z" level=warning msg="container event discarded" container=66b03c1daee5c81ff15a78580da09b1e2dce31f49eb06308f239f8314a4d6fc3 type=CONTAINER_STARTED_EVENT Apr 16 02:38:36.668191 containerd[1580]: time="2026-04-16T02:38:36.667437234Z" level=warning msg="container event discarded" container=b1bec2dd68828e3a6244c91731321fc51e78fcfdbbbb51a97d698a1d00e780f4 type=CONTAINER_CREATED_EVENT Apr 16 02:38:36.739685 containerd[1580]: time="2026-04-16T02:38:36.739444278Z" level=warning msg="container event discarded" container=b1bec2dd68828e3a6244c91731321fc51e78fcfdbbbb51a97d698a1d00e780f4 type=CONTAINER_STARTED_EVENT Apr 16 02:38:37.245788 systemd[1]: Started sshd@46-10.0.0.48:22-10.0.0.1:57510.service - OpenSSH per-connection server daemon (10.0.0.1:57510). 
Apr 16 02:38:37.434812 sshd[6997]: Accepted publickey for core from 10.0.0.1 port 57510 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:38:37.441301 sshd-session[6997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:38:37.458162 systemd-logind[1562]: New session 47 of user core. Apr 16 02:38:37.471976 systemd[1]: Started session-47.scope - Session 47 of User core. Apr 16 02:38:37.920005 sshd[7000]: Connection closed by 10.0.0.1 port 57510 Apr 16 02:38:37.922464 sshd-session[6997]: pam_unix(sshd:session): session closed for user core Apr 16 02:38:37.935355 systemd[1]: sshd@46-10.0.0.48:22-10.0.0.1:57510.service: Deactivated successfully. Apr 16 02:38:38.001067 systemd[1]: session-47.scope: Deactivated successfully. Apr 16 02:38:38.008030 systemd-logind[1562]: Session 47 logged out. Waiting for processes to exit. Apr 16 02:38:38.012161 systemd-logind[1562]: Removed session 47. Apr 16 02:38:42.944296 systemd[1]: Started sshd@47-10.0.0.48:22-10.0.0.1:57526.service - OpenSSH per-connection server daemon (10.0.0.1:57526). Apr 16 02:38:43.148587 sshd[7059]: Accepted publickey for core from 10.0.0.1 port 57526 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:38:43.152025 sshd-session[7059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:38:43.168478 systemd-logind[1562]: New session 48 of user core. Apr 16 02:38:43.175713 systemd[1]: Started session-48.scope - Session 48 of User core. Apr 16 02:38:43.608670 sshd[7062]: Connection closed by 10.0.0.1 port 57526 Apr 16 02:38:43.613891 sshd-session[7059]: pam_unix(sshd:session): session closed for user core Apr 16 02:38:43.643326 systemd[1]: sshd@47-10.0.0.48:22-10.0.0.1:57526.service: Deactivated successfully. Apr 16 02:38:43.707086 systemd[1]: session-48.scope: Deactivated successfully. Apr 16 02:38:43.717999 systemd-logind[1562]: Session 48 logged out. Waiting for processes to exit. 
Apr 16 02:38:43.728670 systemd-logind[1562]: Removed session 48.
Apr 16 02:38:47.681754 kubelet[2735]: E0416 02:38:47.681306 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:38:48.585708 containerd[1580]: time="2026-04-16T02:38:48.584743395Z" level=warning msg="container event discarded" container=cfe2c4110f3effb123267128f910c0022b7793b9fdf5d4f4f7a8805b79151eb6 type=CONTAINER_CREATED_EVENT
Apr 16 02:38:48.585708 containerd[1580]: time="2026-04-16T02:38:48.584873625Z" level=warning msg="container event discarded" container=cfe2c4110f3effb123267128f910c0022b7793b9fdf5d4f4f7a8805b79151eb6 type=CONTAINER_STARTED_EVENT
Apr 16 02:38:48.602952 containerd[1580]: time="2026-04-16T02:38:48.602805631Z" level=warning msg="container event discarded" container=a6128e3d2520e93de3eafd62fd7fa55a673452a25854503bded20821ef31e3f0 type=CONTAINER_CREATED_EVENT
Apr 16 02:38:48.602952 containerd[1580]: time="2026-04-16T02:38:48.602884382Z" level=warning msg="container event discarded" container=a6128e3d2520e93de3eafd62fd7fa55a673452a25854503bded20821ef31e3f0 type=CONTAINER_STARTED_EVENT
Apr 16 02:38:48.639964 systemd[1]: Started sshd@48-10.0.0.48:22-10.0.0.1:34394.service - OpenSSH per-connection server daemon (10.0.0.1:34394).
Apr 16 02:38:48.785811 sshd[7102]: Accepted publickey for core from 10.0.0.1 port 34394 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:38:48.793189 sshd-session[7102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:38:48.823921 systemd-logind[1562]: New session 49 of user core.
Apr 16 02:38:48.836920 systemd[1]: Started session-49.scope - Session 49 of User core.
Apr 16 02:38:49.208721 sshd[7105]: Connection closed by 10.0.0.1 port 34394
Apr 16 02:38:49.210066 sshd-session[7102]: pam_unix(sshd:session): session closed for user core
Apr 16 02:38:49.217173 systemd[1]: sshd@48-10.0.0.48:22-10.0.0.1:34394.service: Deactivated successfully.
Apr 16 02:38:49.224298 systemd[1]: session-49.scope: Deactivated successfully.
Apr 16 02:38:49.226125 systemd-logind[1562]: Session 49 logged out. Waiting for processes to exit.
Apr 16 02:38:49.232385 systemd-logind[1562]: Removed session 49.
Apr 16 02:38:50.277388 containerd[1580]: time="2026-04-16T02:38:50.277207936Z" level=warning msg="container event discarded" container=b7c1ed76e66564e03b67bd03b84d5073085db529922d81c2a6212be55df3259d type=CONTAINER_CREATED_EVENT
Apr 16 02:38:50.409612 containerd[1580]: time="2026-04-16T02:38:50.408448404Z" level=warning msg="container event discarded" container=b7c1ed76e66564e03b67bd03b84d5073085db529922d81c2a6212be55df3259d type=CONTAINER_STARTED_EVENT
Apr 16 02:38:50.601652 containerd[1580]: time="2026-04-16T02:38:50.601323578Z" level=warning msg="container event discarded" container=b7c1ed76e66564e03b67bd03b84d5073085db529922d81c2a6212be55df3259d type=CONTAINER_STOPPED_EVENT
Apr 16 02:38:52.867623 update_engine[1566]: I20260416 02:38:52.865865 1566 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 16 02:38:52.867623 update_engine[1566]: I20260416 02:38:52.866964 1566 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 16 02:38:52.872027 update_engine[1566]: I20260416 02:38:52.871773 1566 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 16 02:38:52.872983 update_engine[1566]: I20260416 02:38:52.872359 1566 omaha_request_params.cc:62] Current group set to stable
Apr 16 02:38:52.874332 update_engine[1566]: I20260416 02:38:52.873174 1566 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 16 02:38:52.874332 update_engine[1566]: I20260416 02:38:52.873202 1566 update_attempter.cc:643] Scheduling an action processor start.
Apr 16 02:38:52.874332 update_engine[1566]: I20260416 02:38:52.873256 1566 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 16 02:38:52.874332 update_engine[1566]: I20260416 02:38:52.873314 1566 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 16 02:38:52.874332 update_engine[1566]: I20260416 02:38:52.873368 1566 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 16 02:38:52.874332 update_engine[1566]: I20260416 02:38:52.873373 1566 omaha_request_action.cc:272] Request:
Apr 16 02:38:52.874332 update_engine[1566]:
Apr 16 02:38:52.874332 update_engine[1566]:
Apr 16 02:38:52.874332 update_engine[1566]:
Apr 16 02:38:52.874332 update_engine[1566]:
Apr 16 02:38:52.874332 update_engine[1566]:
Apr 16 02:38:52.874332 update_engine[1566]:
Apr 16 02:38:52.874332 update_engine[1566]:
Apr 16 02:38:52.874332 update_engine[1566]:
Apr 16 02:38:52.874332 update_engine[1566]: I20260416 02:38:52.873379 1566 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 02:38:52.882819 update_engine[1566]: I20260416 02:38:52.881907 1566 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 02:38:52.884091 update_engine[1566]: I20260416 02:38:52.883874 1566 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 02:38:52.892661 locksmithd[1612]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 16 02:38:52.894136 update_engine[1566]: E20260416 02:38:52.893993 1566 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 02:38:52.894385 update_engine[1566]: I20260416 02:38:52.894341 1566 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 16 02:38:53.246792 containerd[1580]: time="2026-04-16T02:38:53.246302403Z" level=warning msg="container event discarded" container=b6bef2e5e91cd140ea34557b116ead0f377ac22e85314c27111c915723fa9f35 type=CONTAINER_CREATED_EVENT
Apr 16 02:38:53.489442 containerd[1580]: time="2026-04-16T02:38:53.489334688Z" level=warning msg="container event discarded" container=b6bef2e5e91cd140ea34557b116ead0f377ac22e85314c27111c915723fa9f35 type=CONTAINER_STARTED_EVENT
Apr 16 02:38:53.667391 kubelet[2735]: E0416 02:38:53.665622 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:38:54.243462 systemd[1]: Started sshd@49-10.0.0.48:22-10.0.0.1:34398.service - OpenSSH per-connection server daemon (10.0.0.1:34398).
Apr 16 02:38:54.371114 sshd[7142]: Accepted publickey for core from 10.0.0.1 port 34398 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:38:54.372731 sshd-session[7142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:38:54.384895 systemd-logind[1562]: New session 50 of user core.
Apr 16 02:38:54.394935 systemd[1]: Started session-50.scope - Session 50 of User core.
Apr 16 02:38:54.722945 sshd[7145]: Connection closed by 10.0.0.1 port 34398
Apr 16 02:38:54.723437 sshd-session[7142]: pam_unix(sshd:session): session closed for user core
Apr 16 02:38:54.734106 systemd[1]: sshd@49-10.0.0.48:22-10.0.0.1:34398.service: Deactivated successfully.
Apr 16 02:38:54.739069 systemd[1]: session-50.scope: Deactivated successfully.
Apr 16 02:38:54.746106 systemd-logind[1562]: Session 50 logged out. Waiting for processes to exit.
Apr 16 02:38:54.749456 systemd-logind[1562]: Removed session 50.
Apr 16 02:38:59.747052 systemd[1]: Started sshd@50-10.0.0.48:22-10.0.0.1:37822.service - OpenSSH per-connection server daemon (10.0.0.1:37822).
Apr 16 02:38:59.947364 sshd[7163]: Accepted publickey for core from 10.0.0.1 port 37822 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:38:59.979175 sshd-session[7163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:38:59.993087 systemd-logind[1562]: New session 51 of user core.
Apr 16 02:39:00.000934 systemd[1]: Started session-51.scope - Session 51 of User core.
Apr 16 02:39:00.377515 sshd[7166]: Connection closed by 10.0.0.1 port 37822
Apr 16 02:39:00.375058 sshd-session[7163]: pam_unix(sshd:session): session closed for user core
Apr 16 02:39:00.382586 systemd[1]: sshd@50-10.0.0.48:22-10.0.0.1:37822.service: Deactivated successfully.
Apr 16 02:39:00.386646 systemd[1]: session-51.scope: Deactivated successfully.
Apr 16 02:39:00.396024 systemd-logind[1562]: Session 51 logged out. Waiting for processes to exit.
Apr 16 02:39:00.404173 systemd-logind[1562]: Removed session 51.
Apr 16 02:39:00.648896 kubelet[2735]: E0416 02:39:00.648486 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:39:02.795050 update_engine[1566]: I20260416 02:39:02.794647 1566 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 02:39:02.795598 update_engine[1566]: I20260416 02:39:02.795121 1566 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 02:39:02.795754 update_engine[1566]: I20260416 02:39:02.795704 1566 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 02:39:02.805263 update_engine[1566]: E20260416 02:39:02.804800 1566 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 02:39:02.805263 update_engine[1566]: I20260416 02:39:02.805042 1566 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 16 02:39:05.416807 systemd[1]: Started sshd@51-10.0.0.48:22-10.0.0.1:56598.service - OpenSSH per-connection server daemon (10.0.0.1:56598).
Apr 16 02:39:05.578619 sshd[7182]: Accepted publickey for core from 10.0.0.1 port 56598 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:39:05.582791 sshd-session[7182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:39:05.604975 systemd[1]: Started session-52.scope - Session 52 of User core.
Apr 16 02:39:05.607172 systemd-logind[1562]: New session 52 of user core.
Apr 16 02:39:05.938551 sshd[7197]: Connection closed by 10.0.0.1 port 56598
Apr 16 02:39:05.939080 sshd-session[7182]: pam_unix(sshd:session): session closed for user core
Apr 16 02:39:05.951783 systemd[1]: sshd@51-10.0.0.48:22-10.0.0.1:56598.service: Deactivated successfully.
Apr 16 02:39:05.960063 systemd[1]: session-52.scope: Deactivated successfully.
Apr 16 02:39:05.968340 systemd-logind[1562]: Session 52 logged out. Waiting for processes to exit.
Apr 16 02:39:05.974634 systemd-logind[1562]: Removed session 52.
Apr 16 02:39:07.622143 containerd[1580]: time="2026-04-16T02:39:07.621930529Z" level=warning msg="container event discarded" container=dc43f5a9e96efd95e4ed06401af66fbe6dcd73e42923a966f416d708c0120049 type=CONTAINER_CREATED_EVENT
Apr 16 02:39:07.822280 containerd[1580]: time="2026-04-16T02:39:07.822133733Z" level=warning msg="container event discarded" container=dc43f5a9e96efd95e4ed06401af66fbe6dcd73e42923a966f416d708c0120049 type=CONTAINER_STARTED_EVENT
Apr 16 02:39:08.051753 containerd[1580]: time="2026-04-16T02:39:08.051630862Z" level=warning msg="container event discarded" container=dc43f5a9e96efd95e4ed06401af66fbe6dcd73e42923a966f416d708c0120049 type=CONTAINER_STOPPED_EVENT
Apr 16 02:39:08.647405 kubelet[2735]: E0416 02:39:08.647277 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:39:10.981867 systemd[1]: Started sshd@52-10.0.0.48:22-10.0.0.1:56610.service - OpenSSH per-connection server daemon (10.0.0.1:56610).
Apr 16 02:39:11.144140 sshd[7277]: Accepted publickey for core from 10.0.0.1 port 56610 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:39:11.184862 sshd-session[7277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:39:11.209330 systemd-logind[1562]: New session 53 of user core.
Apr 16 02:39:11.226044 systemd[1]: Started session-53.scope - Session 53 of User core.
Apr 16 02:39:11.736771 sshd[7280]: Connection closed by 10.0.0.1 port 56610
Apr 16 02:39:11.740299 sshd-session[7277]: pam_unix(sshd:session): session closed for user core
Apr 16 02:39:11.777712 systemd[1]: sshd@52-10.0.0.48:22-10.0.0.1:56610.service: Deactivated successfully.
Apr 16 02:39:11.784715 systemd[1]: session-53.scope: Deactivated successfully.
Apr 16 02:39:11.795930 systemd-logind[1562]: Session 53 logged out. Waiting for processes to exit.
Apr 16 02:39:11.810125 systemd-logind[1562]: Removed session 53.
Apr 16 02:39:12.783754 update_engine[1566]: I20260416 02:39:12.781474 1566 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 02:39:12.783754 update_engine[1566]: I20260416 02:39:12.781673 1566 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 02:39:12.783754 update_engine[1566]: I20260416 02:39:12.782341 1566 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 02:39:12.793321 update_engine[1566]: E20260416 02:39:12.793068 1566 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 02:39:12.793697 update_engine[1566]: I20260416 02:39:12.793454 1566 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 16 02:39:13.681131 containerd[1580]: time="2026-04-16T02:39:13.680975090Z" level=warning msg="container event discarded" container=9fe83b306a8881650464c2109a7b9876e89aae0af7d0c9f339d3a762e174e274 type=CONTAINER_CREATED_EVENT
Apr 16 02:39:14.007434 containerd[1580]: time="2026-04-16T02:39:13.947887854Z" level=warning msg="container event discarded" container=9fe83b306a8881650464c2109a7b9876e89aae0af7d0c9f339d3a762e174e274 type=CONTAINER_STARTED_EVENT
Apr 16 02:39:15.102631 containerd[1580]: time="2026-04-16T02:39:15.102286759Z" level=warning msg="container event discarded" container=9fe83b306a8881650464c2109a7b9876e89aae0af7d0c9f339d3a762e174e274 type=CONTAINER_STOPPED_EVENT
Apr 16 02:39:15.531370 containerd[1580]: time="2026-04-16T02:39:15.531125074Z" level=warning msg="container event discarded" container=ff7b8cf29b0b5015a6c7bb6c5e62c230c4d70ffa77823dcdeae6ce73800a47c5 type=CONTAINER_CREATED_EVENT
Apr 16 02:39:15.766436 containerd[1580]: time="2026-04-16T02:39:15.765670786Z" level=warning msg="container event discarded" container=ff7b8cf29b0b5015a6c7bb6c5e62c230c4d70ffa77823dcdeae6ce73800a47c5 type=CONTAINER_STARTED_EVENT
Apr 16 02:39:16.784402 systemd[1]: Started sshd@53-10.0.0.48:22-10.0.0.1:51370.service - OpenSSH per-connection server daemon (10.0.0.1:51370).
Apr 16 02:39:17.033381 sshd[7293]: Accepted publickey for core from 10.0.0.1 port 51370 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:39:17.096693 sshd-session[7293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:39:17.120483 systemd-logind[1562]: New session 54 of user core.
Apr 16 02:39:17.131805 systemd[1]: Started session-54.scope - Session 54 of User core.
Apr 16 02:39:17.713268 sshd[7308]: Connection closed by 10.0.0.1 port 51370
Apr 16 02:39:17.715175 sshd-session[7293]: pam_unix(sshd:session): session closed for user core
Apr 16 02:39:17.723846 systemd[1]: sshd@53-10.0.0.48:22-10.0.0.1:51370.service: Deactivated successfully.
Apr 16 02:39:17.731086 systemd[1]: session-54.scope: Deactivated successfully.
Apr 16 02:39:17.786848 systemd-logind[1562]: Session 54 logged out. Waiting for processes to exit.
Apr 16 02:39:17.797398 systemd-logind[1562]: Removed session 54.
Apr 16 02:39:18.728350 containerd[1580]: time="2026-04-16T02:39:18.727882583Z" level=warning msg="container event discarded" container=3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5 type=CONTAINER_CREATED_EVENT
Apr 16 02:39:18.729788 containerd[1580]: time="2026-04-16T02:39:18.729056208Z" level=warning msg="container event discarded" container=3c47db3f6ce7a45d9f640ce21aee7b039bd5a8b351b88a58243bdacb589bbfa5 type=CONTAINER_STARTED_EVENT
Apr 16 02:39:21.272596 containerd[1580]: time="2026-04-16T02:39:21.272263102Z" level=warning msg="container event discarded" container=1978e3ff2806e5f9f2864710e18803466759b86ffc90ae42065b4da01a853a2e type=CONTAINER_CREATED_EVENT
Apr 16 02:39:21.538448 containerd[1580]: time="2026-04-16T02:39:21.536507039Z" level=warning msg="container event discarded" container=1978e3ff2806e5f9f2864710e18803466759b86ffc90ae42065b4da01a853a2e type=CONTAINER_STARTED_EVENT
Apr 16 02:39:22.771687 update_engine[1566]: I20260416 02:39:22.771538 1566 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 02:39:22.772299 update_engine[1566]: I20260416 02:39:22.771790 1566 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 02:39:22.772299 update_engine[1566]: I20260416 02:39:22.772282 1566 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 02:39:22.779742 update_engine[1566]: E20260416 02:39:22.779448 1566 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 02:39:22.780669 update_engine[1566]: I20260416 02:39:22.780599 1566 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 16 02:39:22.780669 update_engine[1566]: I20260416 02:39:22.780649 1566 omaha_request_action.cc:617] Omaha request response:
Apr 16 02:39:22.780850 update_engine[1566]: E20260416 02:39:22.780766 1566 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 16 02:39:22.780850 update_engine[1566]: I20260416 02:39:22.780818 1566 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 16 02:39:22.780850 update_engine[1566]: I20260416 02:39:22.780823 1566 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 16 02:39:22.780850 update_engine[1566]: I20260416 02:39:22.780828 1566 update_attempter.cc:306] Processing Done.
Apr 16 02:39:22.780850 update_engine[1566]: E20260416 02:39:22.780846 1566 update_attempter.cc:619] Update failed.
Apr 16 02:39:22.780999 update_engine[1566]: I20260416 02:39:22.780851 1566 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 16 02:39:22.780999 update_engine[1566]: I20260416 02:39:22.780857 1566 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 16 02:39:22.780999 update_engine[1566]: I20260416 02:39:22.780862 1566 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 16 02:39:22.780999 update_engine[1566]: I20260416 02:39:22.780946 1566 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 16 02:39:22.780999 update_engine[1566]: I20260416 02:39:22.780972 1566 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 16 02:39:22.780999 update_engine[1566]: I20260416 02:39:22.780977 1566 omaha_request_action.cc:272] Request:
Apr 16 02:39:22.780999 update_engine[1566]:
Apr 16 02:39:22.780999 update_engine[1566]:
Apr 16 02:39:22.780999 update_engine[1566]:
Apr 16 02:39:22.780999 update_engine[1566]:
Apr 16 02:39:22.780999 update_engine[1566]:
Apr 16 02:39:22.780999 update_engine[1566]:
Apr 16 02:39:22.780999 update_engine[1566]: I20260416 02:39:22.780982 1566 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 02:39:22.781625 update_engine[1566]: I20260416 02:39:22.781008 1566 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 02:39:22.781625 update_engine[1566]: I20260416 02:39:22.781599 1566 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 02:39:22.782607 locksmithd[1612]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 16 02:39:22.784321 systemd[1]: Started sshd@54-10.0.0.48:22-10.0.0.1:51386.service - OpenSSH per-connection server daemon (10.0.0.1:51386).
Apr 16 02:39:22.789009 update_engine[1566]: E20260416 02:39:22.788916 1566 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 02:39:22.789128 update_engine[1566]: I20260416 02:39:22.789041 1566 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 16 02:39:22.789128 update_engine[1566]: I20260416 02:39:22.789067 1566 omaha_request_action.cc:617] Omaha request response:
Apr 16 02:39:22.789128 update_engine[1566]: I20260416 02:39:22.789079 1566 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 16 02:39:22.789128 update_engine[1566]: I20260416 02:39:22.789083 1566 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 16 02:39:22.789128 update_engine[1566]: I20260416 02:39:22.789088 1566 update_attempter.cc:306] Processing Done.
Apr 16 02:39:22.789128 update_engine[1566]: I20260416 02:39:22.789098 1566 update_attempter.cc:310] Error event sent.
Apr 16 02:39:22.789128 update_engine[1566]: I20260416 02:39:22.789108 1566 update_check_scheduler.cc:74] Next update check in 43m7s
Apr 16 02:39:22.790982 locksmithd[1612]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 16 02:39:22.957706 sshd[7360]: Accepted publickey for core from 10.0.0.1 port 51386 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:39:22.960180 sshd-session[7360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:39:22.982858 systemd-logind[1562]: New session 55 of user core.
Apr 16 02:39:22.995705 systemd[1]: Started session-55.scope - Session 55 of User core.
Apr 16 02:39:23.386789 sshd[7363]: Connection closed by 10.0.0.1 port 51386
Apr 16 02:39:23.387583 sshd-session[7360]: pam_unix(sshd:session): session closed for user core
Apr 16 02:39:23.413340 systemd[1]: sshd@54-10.0.0.48:22-10.0.0.1:51386.service: Deactivated successfully.
Apr 16 02:39:23.418847 systemd[1]: session-55.scope: Deactivated successfully.
Apr 16 02:39:23.423843 systemd-logind[1562]: Session 55 logged out. Waiting for processes to exit.
Apr 16 02:39:23.428036 systemd-logind[1562]: Removed session 55.
Apr 16 02:39:24.523258 containerd[1580]: time="2026-04-16T02:39:24.522976195Z" level=warning msg="container event discarded" container=b1bafd3e593e7d90bc21ae552eefd2345f80cada770969a2594c43e7f4c94303 type=CONTAINER_CREATED_EVENT
Apr 16 02:39:24.709185 containerd[1580]: time="2026-04-16T02:39:24.709074925Z" level=warning msg="container event discarded" container=b1bafd3e593e7d90bc21ae552eefd2345f80cada770969a2594c43e7f4c94303 type=CONTAINER_STARTED_EVENT
Apr 16 02:39:24.709455 kubelet[2735]: E0416 02:39:24.709395 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:39:27.424479 containerd[1580]: time="2026-04-16T02:39:27.424344950Z" level=warning msg="container event discarded" container=ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc type=CONTAINER_CREATED_EVENT
Apr 16 02:39:27.424479 containerd[1580]: time="2026-04-16T02:39:27.424438650Z" level=warning msg="container event discarded" container=ec4938f5666a6d1a6a47de69b6c00709ad72dea48e6f5040741ffda5ebf580bc type=CONTAINER_STARTED_EVENT
Apr 16 02:39:27.620155 containerd[1580]: time="2026-04-16T02:39:27.619765591Z" level=warning msg="container event discarded" container=a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e type=CONTAINER_CREATED_EVENT
Apr 16 02:39:27.620155 containerd[1580]: time="2026-04-16T02:39:27.619935239Z" level=warning msg="container event discarded" container=a8f6b1e774105b38ad0630b0c320d352a0816ce5b5c01e41e12486970a66bc0e type=CONTAINER_STARTED_EVENT
Apr 16 02:39:28.426104 systemd[1]: Started sshd@55-10.0.0.48:22-10.0.0.1:40966.service - OpenSSH per-connection server daemon (10.0.0.1:40966).
Apr 16 02:39:28.557653 sshd[7378]: Accepted publickey for core from 10.0.0.1 port 40966 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:39:28.561118 sshd-session[7378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:39:28.573843 systemd-logind[1562]: New session 56 of user core.
Apr 16 02:39:28.596070 systemd[1]: Started session-56.scope - Session 56 of User core.
Apr 16 02:39:28.639902 containerd[1580]: time="2026-04-16T02:39:28.639679329Z" level=warning msg="container event discarded" container=baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9 type=CONTAINER_CREATED_EVENT
Apr 16 02:39:28.639902 containerd[1580]: time="2026-04-16T02:39:28.639882905Z" level=warning msg="container event discarded" container=baae11d2ca7869516dc9757b6b1a8ea9f41d19b093e51669bcfdc491467881e9 type=CONTAINER_STARTED_EVENT
Apr 16 02:39:28.890181 containerd[1580]: time="2026-04-16T02:39:28.889878275Z" level=warning msg="container event discarded" container=e904de0db84af736e11f0cd198a9b9d9d712b80217245bf09d7c40e46fc54545 type=CONTAINER_CREATED_EVENT
Apr 16 02:39:28.922766 sshd[7381]: Connection closed by 10.0.0.1 port 40966
Apr 16 02:39:28.921669 sshd-session[7378]: pam_unix(sshd:session): session closed for user core
Apr 16 02:39:28.992056 containerd[1580]: time="2026-04-16T02:39:28.991590329Z" level=warning msg="container event discarded" container=a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78 type=CONTAINER_CREATED_EVENT
Apr 16 02:39:28.992056 containerd[1580]: time="2026-04-16T02:39:28.991653232Z" level=warning msg="container event discarded" container=a934afdb3b4808a9361bde02be797c828c8adb855b5780f6547ef6a9ca24db78 type=CONTAINER_STARTED_EVENT
Apr 16 02:39:28.989800 systemd[1]: sshd@55-10.0.0.48:22-10.0.0.1:40966.service: Deactivated successfully.
Apr 16 02:39:28.993044 systemd[1]: session-56.scope: Deactivated successfully.
Apr 16 02:39:28.996343 systemd-logind[1562]: Session 56 logged out. Waiting for processes to exit.
Apr 16 02:39:29.000941 systemd-logind[1562]: Removed session 56.
Apr 16 02:39:29.097052 containerd[1580]: time="2026-04-16T02:39:29.096730428Z" level=warning msg="container event discarded" container=e904de0db84af736e11f0cd198a9b9d9d712b80217245bf09d7c40e46fc54545 type=CONTAINER_STARTED_EVENT
Apr 16 02:39:30.028245 containerd[1580]: time="2026-04-16T02:39:30.028139114Z" level=warning msg="container event discarded" container=e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15 type=CONTAINER_CREATED_EVENT
Apr 16 02:39:30.028245 containerd[1580]: time="2026-04-16T02:39:30.028192633Z" level=warning msg="container event discarded" container=e568a928abb9cc71a4e5e0f2bfb444e6b163c1ef314fd0544ac350bcf300aa15 type=CONTAINER_STARTED_EVENT
Apr 16 02:39:30.625440 containerd[1580]: time="2026-04-16T02:39:30.625027536Z" level=warning msg="container event discarded" container=623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c type=CONTAINER_CREATED_EVENT
Apr 16 02:39:30.625440 containerd[1580]: time="2026-04-16T02:39:30.625138305Z" level=warning msg="container event discarded" container=623ec65805fadb8eedbc557d65e18e07cfdb6da0332d5367b8cc74b956dc828c type=CONTAINER_STARTED_EVENT
Apr 16 02:39:30.666025 containerd[1580]: time="2026-04-16T02:39:30.665752910Z" level=warning msg="container event discarded" container=c69b554c77ec5e7e1765a5da5cc7c019998b4c636b235a47554d91adb5954975 type=CONTAINER_CREATED_EVENT
Apr 16 02:39:30.863664 containerd[1580]: time="2026-04-16T02:39:30.863435354Z" level=warning msg="container event discarded" container=c69b554c77ec5e7e1765a5da5cc7c019998b4c636b235a47554d91adb5954975 type=CONTAINER_STARTED_EVENT
Apr 16 02:39:31.452769 containerd[1580]: time="2026-04-16T02:39:31.452472460Z" level=warning msg="container event discarded" container=b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2 type=CONTAINER_CREATED_EVENT
Apr 16 02:39:31.452769 containerd[1580]: time="2026-04-16T02:39:31.452748426Z" level=warning msg="container event discarded" container=b81c1890f8b4a3b70cb18ca834fb766b2a784cd554635a672204837ccc6871a2 type=CONTAINER_STARTED_EVENT
Apr 16 02:39:32.024297 containerd[1580]: time="2026-04-16T02:39:32.022916069Z" level=warning msg="container event discarded" container=3761d42f62688589ccf9ca9bda3d77ed7826b0723b9f1d124f4f9227ec626b0c type=CONTAINER_CREATED_EVENT
Apr 16 02:39:32.302816 containerd[1580]: time="2026-04-16T02:39:32.301665009Z" level=warning msg="container event discarded" container=3761d42f62688589ccf9ca9bda3d77ed7826b0723b9f1d124f4f9227ec626b0c type=CONTAINER_STARTED_EVENT
Apr 16 02:39:32.531143 containerd[1580]: time="2026-04-16T02:39:32.530886335Z" level=warning msg="container event discarded" container=8e1221bd905b97a38e498261d0ba351ea1fa8f05ea8bbb358fc7e14bc335e630 type=CONTAINER_CREATED_EVENT
Apr 16 02:39:32.665511 containerd[1580]: time="2026-04-16T02:39:32.646984903Z" level=warning msg="container event discarded" container=8e1221bd905b97a38e498261d0ba351ea1fa8f05ea8bbb358fc7e14bc335e630 type=CONTAINER_STARTED_EVENT
Apr 16 02:39:34.016174 systemd[1]: Started sshd@56-10.0.0.48:22-10.0.0.1:40972.service - OpenSSH per-connection server daemon (10.0.0.1:40972).
Apr 16 02:39:34.103905 sshd[7395]: Accepted publickey for core from 10.0.0.1 port 40972 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:39:34.106943 sshd-session[7395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:39:34.146419 systemd-logind[1562]: New session 57 of user core.
Apr 16 02:39:34.212998 systemd[1]: Started session-57.scope - Session 57 of User core.
Apr 16 02:39:34.427822 containerd[1580]: time="2026-04-16T02:39:34.427043983Z" level=warning msg="container event discarded" container=cc2738f0bc915b4358d426ec1853ae495c9877394330aee6ce644f9791381885 type=CONTAINER_CREATED_EVENT
Apr 16 02:39:34.472175 sshd[7398]: Connection closed by 10.0.0.1 port 40972
Apr 16 02:39:34.472967 sshd-session[7395]: pam_unix(sshd:session): session closed for user core
Apr 16 02:39:34.479977 systemd[1]: sshd@56-10.0.0.48:22-10.0.0.1:40972.service: Deactivated successfully.
Apr 16 02:39:34.483033 systemd[1]: session-57.scope: Deactivated successfully.
Apr 16 02:39:34.484470 systemd-logind[1562]: Session 57 logged out. Waiting for processes to exit.
Apr 16 02:39:34.487501 systemd-logind[1562]: Removed session 57.
Apr 16 02:39:34.666696 containerd[1580]: time="2026-04-16T02:39:34.666317937Z" level=warning msg="container event discarded" container=cc2738f0bc915b4358d426ec1853ae495c9877394330aee6ce644f9791381885 type=CONTAINER_STARTED_EVENT
Apr 16 02:39:37.281699 containerd[1580]: time="2026-04-16T02:39:37.281582686Z" level=warning msg="container event discarded" container=331441659b7b40c116126267b4b6fb88a76f85ffa961b7a61a24e9317d6d0b2b type=CONTAINER_CREATED_EVENT
Apr 16 02:39:37.468085 containerd[1580]: time="2026-04-16T02:39:37.465353204Z" level=warning msg="container event discarded" container=331441659b7b40c116126267b4b6fb88a76f85ffa961b7a61a24e9317d6d0b2b type=CONTAINER_STARTED_EVENT
Apr 16 02:39:39.508536 systemd[1]: Started sshd@57-10.0.0.48:22-10.0.0.1:34046.service - OpenSSH per-connection server daemon (10.0.0.1:34046).
Apr 16 02:39:39.622301 sshd[7439]: Accepted publickey for core from 10.0.0.1 port 34046 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:39:39.626983 sshd-session[7439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:39:39.710369 systemd-logind[1562]: New session 58 of user core.
Apr 16 02:39:39.730876 systemd[1]: Started session-58.scope - Session 58 of User core.
Apr 16 02:39:39.847904 containerd[1580]: time="2026-04-16T02:39:39.847563061Z" level=warning msg="container event discarded" container=12a65f40b40c5315e55729edd4a9c5fe5af76259665f0ebe4a0a4283eb6dffd7 type=CONTAINER_CREATED_EVENT
Apr 16 02:39:39.970746 containerd[1580]: time="2026-04-16T02:39:39.970593141Z" level=warning msg="container event discarded" container=12a65f40b40c5315e55729edd4a9c5fe5af76259665f0ebe4a0a4283eb6dffd7 type=CONTAINER_STARTED_EVENT
Apr 16 02:39:40.008902 sshd[7442]: Connection closed by 10.0.0.1 port 34046
Apr 16 02:39:40.010774 sshd-session[7439]: pam_unix(sshd:session): session closed for user core
Apr 16 02:39:40.020601 systemd[1]: sshd@57-10.0.0.48:22-10.0.0.1:34046.service: Deactivated successfully.
Apr 16 02:39:40.031040 systemd[1]: session-58.scope: Deactivated successfully.
Apr 16 02:39:40.033252 systemd-logind[1562]: Session 58 logged out. Waiting for processes to exit.
Apr 16 02:39:40.040396 systemd-logind[1562]: Removed session 58.
Apr 16 02:39:42.258657 containerd[1580]: time="2026-04-16T02:39:42.257868213Z" level=warning msg="container event discarded" container=3b42178cd83a3ce2c55de1753460f2d2ea9125e3bafc3be078cd86f160f6a42e type=CONTAINER_CREATED_EVENT
Apr 16 02:39:42.398469 containerd[1580]: time="2026-04-16T02:39:42.398262429Z" level=warning msg="container event discarded" container=3b42178cd83a3ce2c55de1753460f2d2ea9125e3bafc3be078cd86f160f6a42e type=CONTAINER_STARTED_EVENT
Apr 16 02:39:42.648010 kubelet[2735]: E0416 02:39:42.647284 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:39:45.042500 systemd[1]: Started sshd@58-10.0.0.48:22-10.0.0.1:34052.service - OpenSSH per-connection server daemon (10.0.0.1:34052).
Apr 16 02:39:45.189034 sshd[7489]: Accepted publickey for core from 10.0.0.1 port 34052 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:39:45.192010 sshd-session[7489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:39:45.210036 systemd-logind[1562]: New session 59 of user core.
Apr 16 02:39:45.218300 systemd[1]: Started session-59.scope - Session 59 of User core.
Apr 16 02:39:45.506288 sshd[7492]: Connection closed by 10.0.0.1 port 34052
Apr 16 02:39:45.506778 sshd-session[7489]: pam_unix(sshd:session): session closed for user core
Apr 16 02:39:45.513912 systemd[1]: sshd@58-10.0.0.48:22-10.0.0.1:34052.service: Deactivated successfully.
Apr 16 02:39:45.518193 systemd[1]: session-59.scope: Deactivated successfully.
Apr 16 02:39:45.522656 systemd-logind[1562]: Session 59 logged out. Waiting for processes to exit.
Apr 16 02:39:45.525133 systemd-logind[1562]: Removed session 59.
Apr 16 02:39:50.534990 systemd[1]: Started sshd@59-10.0.0.48:22-10.0.0.1:41024.service - OpenSSH per-connection server daemon (10.0.0.1:41024).
Apr 16 02:39:50.629381 sshd[7531]: Accepted publickey for core from 10.0.0.1 port 41024 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:39:50.632719 sshd-session[7531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:39:50.649965 systemd-logind[1562]: New session 60 of user core.
Apr 16 02:39:50.659907 systemd[1]: Started session-60.scope - Session 60 of User core.
Apr 16 02:39:50.974840 sshd[7534]: Connection closed by 10.0.0.1 port 41024
Apr 16 02:39:50.976415 sshd-session[7531]: pam_unix(sshd:session): session closed for user core
Apr 16 02:39:50.985774 systemd[1]: sshd@59-10.0.0.48:22-10.0.0.1:41024.service: Deactivated successfully.
Apr 16 02:39:50.990895 systemd[1]: session-60.scope: Deactivated successfully.
Apr 16 02:39:50.995282 systemd-logind[1562]: Session 60 logged out. Waiting for processes to exit.
Apr 16 02:39:51.002359 systemd-logind[1562]: Removed session 60.
Apr 16 02:39:51.648512 kubelet[2735]: E0416 02:39:51.648140 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:39:56.047708 systemd[1]: Started sshd@60-10.0.0.48:22-10.0.0.1:60100.service - OpenSSH per-connection server daemon (10.0.0.1:60100).
Apr 16 02:39:56.237055 sshd[7575]: Accepted publickey for core from 10.0.0.1 port 60100 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:39:56.244678 sshd-session[7575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:39:56.268950 systemd-logind[1562]: New session 61 of user core.
Apr 16 02:39:56.279782 systemd[1]: Started session-61.scope - Session 61 of User core.
Apr 16 02:39:56.642719 sshd[7578]: Connection closed by 10.0.0.1 port 60100
Apr 16 02:39:56.643350 sshd-session[7575]: pam_unix(sshd:session): session closed for user core
Apr 16 02:39:56.652743 systemd[1]: sshd@60-10.0.0.48:22-10.0.0.1:60100.service: Deactivated successfully.
Apr 16 02:39:56.656876 systemd[1]: session-61.scope: Deactivated successfully.
Apr 16 02:39:56.662864 systemd-logind[1562]: Session 61 logged out. Waiting for processes to exit.
Apr 16 02:39:56.689475 systemd-logind[1562]: Removed session 61.
Apr 16 02:40:01.695941 systemd[1]: Started sshd@61-10.0.0.48:22-10.0.0.1:60116.service - OpenSSH per-connection server daemon (10.0.0.1:60116).
Apr 16 02:40:01.832907 sshd[7591]: Accepted publickey for core from 10.0.0.1 port 60116 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:40:01.837169 sshd-session[7591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:40:01.926289 systemd-logind[1562]: New session 62 of user core.
Apr 16 02:40:01.931592 systemd[1]: Started session-62.scope - Session 62 of User core.
Apr 16 02:40:02.201052 sshd[7594]: Connection closed by 10.0.0.1 port 60116
Apr 16 02:40:02.200859 sshd-session[7591]: pam_unix(sshd:session): session closed for user core
Apr 16 02:40:02.213004 systemd[1]: sshd@61-10.0.0.48:22-10.0.0.1:60116.service: Deactivated successfully.
Apr 16 02:40:02.216074 systemd[1]: session-62.scope: Deactivated successfully.
Apr 16 02:40:02.222799 systemd-logind[1562]: Session 62 logged out. Waiting for processes to exit.
Apr 16 02:40:02.225353 systemd-logind[1562]: Removed session 62.
Apr 16 02:40:07.218187 systemd[1]: Started sshd@62-10.0.0.48:22-10.0.0.1:34148.service - OpenSSH per-connection server daemon (10.0.0.1:34148).
Apr 16 02:40:07.383455 sshd[7613]: Accepted publickey for core from 10.0.0.1 port 34148 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:40:07.386830 sshd-session[7613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:40:07.405050 systemd-logind[1562]: New session 63 of user core.
Apr 16 02:40:07.415649 systemd[1]: Started session-63.scope - Session 63 of User core.
Apr 16 02:40:07.650592 kubelet[2735]: E0416 02:40:07.650086 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:40:07.710713 sshd[7616]: Connection closed by 10.0.0.1 port 34148
Apr 16 02:40:07.711535 sshd-session[7613]: pam_unix(sshd:session): session closed for user core
Apr 16 02:40:07.718570 systemd[1]: sshd@62-10.0.0.48:22-10.0.0.1:34148.service: Deactivated successfully.
Apr 16 02:40:07.723511 systemd[1]: session-63.scope: Deactivated successfully.
Apr 16 02:40:07.727131 systemd-logind[1562]: Session 63 logged out. Waiting for processes to exit.
Apr 16 02:40:07.731106 systemd-logind[1562]: Removed session 63.
Apr 16 02:40:10.646798 kubelet[2735]: E0416 02:40:10.646553 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:40:12.736163 systemd[1]: Started sshd@63-10.0.0.48:22-10.0.0.1:34150.service - OpenSSH per-connection server daemon (10.0.0.1:34150).
Apr 16 02:40:12.897468 sshd[7676]: Accepted publickey for core from 10.0.0.1 port 34150 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:40:12.900685 sshd-session[7676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:40:12.914145 systemd-logind[1562]: New session 64 of user core.
Apr 16 02:40:12.923824 systemd[1]: Started session-64.scope - Session 64 of User core.
Apr 16 02:40:13.229655 sshd[7681]: Connection closed by 10.0.0.1 port 34150
Apr 16 02:40:13.229991 sshd-session[7676]: pam_unix(sshd:session): session closed for user core
Apr 16 02:40:13.236037 systemd[1]: sshd@63-10.0.0.48:22-10.0.0.1:34150.service: Deactivated successfully.
Apr 16 02:40:13.242529 systemd[1]: session-64.scope: Deactivated successfully.
Apr 16 02:40:13.247508 systemd-logind[1562]: Session 64 logged out. Waiting for processes to exit.
Apr 16 02:40:13.279303 systemd-logind[1562]: Removed session 64.
Apr 16 02:40:13.651771 kubelet[2735]: E0416 02:40:13.649207 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:40:15.648562 kubelet[2735]: E0416 02:40:15.646986 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:40:18.263528 systemd[1]: Started sshd@64-10.0.0.48:22-10.0.0.1:56878.service - OpenSSH per-connection server daemon (10.0.0.1:56878).
Apr 16 02:40:18.502266 sshd[7718]: Accepted publickey for core from 10.0.0.1 port 56878 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:40:18.506507 sshd-session[7718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:40:18.548427 systemd-logind[1562]: New session 65 of user core.
Apr 16 02:40:18.629078 systemd[1]: Started session-65.scope - Session 65 of User core.
Apr 16 02:40:19.223068 sshd[7744]: Connection closed by 10.0.0.1 port 56878
Apr 16 02:40:19.225113 sshd-session[7718]: pam_unix(sshd:session): session closed for user core
Apr 16 02:40:19.234802 systemd[1]: sshd@64-10.0.0.48:22-10.0.0.1:56878.service: Deactivated successfully.
Apr 16 02:40:19.239461 systemd[1]: session-65.scope: Deactivated successfully.
Apr 16 02:40:19.243892 systemd-logind[1562]: Session 65 logged out. Waiting for processes to exit.
Apr 16 02:40:19.246663 systemd-logind[1562]: Removed session 65.
Apr 16 02:40:24.240553 systemd[1]: Started sshd@65-10.0.0.48:22-10.0.0.1:56894.service - OpenSSH per-connection server daemon (10.0.0.1:56894).
Apr 16 02:40:24.348731 sshd[7757]: Accepted publickey for core from 10.0.0.1 port 56894 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:40:24.363136 sshd-session[7757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:40:24.391959 systemd-logind[1562]: New session 66 of user core.
Apr 16 02:40:24.400047 systemd[1]: Started session-66.scope - Session 66 of User core.
Apr 16 02:40:24.699556 sshd[7760]: Connection closed by 10.0.0.1 port 56894
Apr 16 02:40:24.700126 sshd-session[7757]: pam_unix(sshd:session): session closed for user core
Apr 16 02:40:24.706948 systemd[1]: sshd@65-10.0.0.48:22-10.0.0.1:56894.service: Deactivated successfully.
Apr 16 02:40:24.712720 systemd[1]: session-66.scope: Deactivated successfully.
Apr 16 02:40:24.715741 systemd-logind[1562]: Session 66 logged out. Waiting for processes to exit.
Apr 16 02:40:24.717731 systemd-logind[1562]: Removed session 66.
Apr 16 02:40:29.757017 systemd[1]: Started sshd@66-10.0.0.48:22-10.0.0.1:36598.service - OpenSSH per-connection server daemon (10.0.0.1:36598).
Apr 16 02:40:29.900640 sshd[7779]: Accepted publickey for core from 10.0.0.1 port 36598 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:40:29.903742 sshd-session[7779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:40:29.925187 systemd-logind[1562]: New session 67 of user core.
Apr 16 02:40:29.936139 systemd[1]: Started session-67.scope - Session 67 of User core.
Apr 16 02:40:30.286302 sshd[7782]: Connection closed by 10.0.0.1 port 36598
Apr 16 02:40:30.286371 sshd-session[7779]: pam_unix(sshd:session): session closed for user core
Apr 16 02:40:30.294801 systemd[1]: sshd@66-10.0.0.48:22-10.0.0.1:36598.service: Deactivated successfully.
Apr 16 02:40:30.298346 systemd[1]: session-67.scope: Deactivated successfully.
Apr 16 02:40:30.299972 systemd-logind[1562]: Session 67 logged out. Waiting for processes to exit.
Apr 16 02:40:30.302293 systemd-logind[1562]: Removed session 67.
Apr 16 02:40:32.663965 kubelet[2735]: E0416 02:40:32.663622 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:40:35.320754 systemd[1]: Started sshd@67-10.0.0.48:22-10.0.0.1:36508.service - OpenSSH per-connection server daemon (10.0.0.1:36508).
Apr 16 02:40:35.449912 sshd[7797]: Accepted publickey for core from 10.0.0.1 port 36508 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:40:35.452979 sshd-session[7797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:40:35.482184 systemd-logind[1562]: New session 68 of user core.
Apr 16 02:40:35.489761 systemd[1]: Started session-68.scope - Session 68 of User core.
Apr 16 02:40:35.795135 sshd[7800]: Connection closed by 10.0.0.1 port 36508
Apr 16 02:40:35.795812 sshd-session[7797]: pam_unix(sshd:session): session closed for user core
Apr 16 02:40:35.808672 systemd[1]: sshd@67-10.0.0.48:22-10.0.0.1:36508.service: Deactivated successfully.
Apr 16 02:40:35.812944 systemd[1]: session-68.scope: Deactivated successfully.
Apr 16 02:40:35.815648 systemd-logind[1562]: Session 68 logged out. Waiting for processes to exit.
Apr 16 02:40:35.823037 systemd-logind[1562]: Removed session 68.
Apr 16 02:40:40.820606 systemd[1]: Started sshd@68-10.0.0.48:22-10.0.0.1:36524.service - OpenSSH per-connection server daemon (10.0.0.1:36524).
Apr 16 02:40:40.960202 sshd[7874]: Accepted publickey for core from 10.0.0.1 port 36524 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 02:40:40.965074 sshd-session[7874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:40:40.991855 systemd-logind[1562]: New session 69 of user core.
Apr 16 02:40:41.016980 systemd[1]: Started session-69.scope - Session 69 of User core.
Apr 16 02:40:41.340445 sshd[7881]: Connection closed by 10.0.0.1 port 36524
Apr 16 02:40:41.341505 sshd-session[7874]: pam_unix(sshd:session): session closed for user core
Apr 16 02:40:41.386290 systemd[1]: sshd@68-10.0.0.48:22-10.0.0.1:36524.service: Deactivated successfully.
Apr 16 02:40:41.386555 systemd-logind[1562]: Session 69 logged out. Waiting for processes to exit.
Apr 16 02:40:41.388202 systemd[1]: session-69.scope: Deactivated successfully.
Apr 16 02:40:41.394982 systemd-logind[1562]: Removed session 69.