Apr 17 23:38:53.998993 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026
Apr 17 23:38:53.999030 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:38:53.999044 kernel: BIOS-provided physical RAM map:
Apr 17 23:38:53.999055 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Apr 17 23:38:53.999064 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Apr 17 23:38:53.999080 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 17 23:38:53.999092 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Apr 17 23:38:53.999102 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Apr 17 23:38:53.999112 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 17 23:38:53.999123 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 17 23:38:53.999133 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 17 23:38:53.999143 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 17 23:38:53.999153 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Apr 17 23:38:54.001201 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 17 23:38:54.001223 kernel: NX (Execute Disable) protection: active
Apr 17 23:38:54.001235 kernel: APIC: Static calls initialized
Apr 17 23:38:54.001246 kernel: SMBIOS 2.8 present.
Apr 17 23:38:54.001257 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Apr 17 23:38:54.001268 kernel: Hypervisor detected: KVM
Apr 17 23:38:54.001285 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 23:38:54.001295 kernel: kvm-clock: using sched offset of 5740523890 cycles
Apr 17 23:38:54.001306 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 23:38:54.001317 kernel: tsc: Detected 2000.000 MHz processor
Apr 17 23:38:54.001329 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:38:54.001341 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:38:54.001352 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Apr 17 23:38:54.001363 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 17 23:38:54.001375 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:38:54.001390 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Apr 17 23:38:54.001401 kernel: Using GB pages for direct mapping
Apr 17 23:38:54.001413 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:38:54.001423 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Apr 17 23:38:54.001434 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:38:54.001445 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:38:54.001456 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:38:54.001467 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 17 23:38:54.001478 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:38:54.001495 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:38:54.001507 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:38:54.001518 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:38:54.001537 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Apr 17 23:38:54.001549 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Apr 17 23:38:54.001561 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 17 23:38:54.001578 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Apr 17 23:38:54.001590 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Apr 17 23:38:54.001602 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Apr 17 23:38:54.001614 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Apr 17 23:38:54.001626 kernel: No NUMA configuration found
Apr 17 23:38:54.001638 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Apr 17 23:38:54.001650 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff]
Apr 17 23:38:54.001662 kernel: Zone ranges:
Apr 17 23:38:54.001679 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:38:54.001691 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 17 23:38:54.001703 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Apr 17 23:38:54.001715 kernel: Movable zone start for each node
Apr 17 23:38:54.001727 kernel: Early memory node ranges
Apr 17 23:38:54.001740 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 17 23:38:54.001752 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Apr 17 23:38:54.001764 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Apr 17 23:38:54.001777 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Apr 17 23:38:54.001789 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:38:54.001805 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 17 23:38:54.001817 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Apr 17 23:38:54.001829 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 17 23:38:54.001841 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 23:38:54.001854 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 17 23:38:54.001866 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 17 23:38:54.001878 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 23:38:54.001889 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:38:54.001901 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 23:38:54.001918 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 23:38:54.001930 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:38:54.001942 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 23:38:54.001955 kernel: TSC deadline timer available
Apr 17 23:38:54.001967 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 17 23:38:54.001979 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 23:38:54.001991 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 17 23:38:54.002002 kernel: kvm-guest: setup PV sched yield
Apr 17 23:38:54.002014 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 17 23:38:54.002031 kernel: Booting paravirtualized kernel on KVM
Apr 17 23:38:54.002043 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:38:54.002056 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 17 23:38:54.002068 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 17 23:38:54.002080 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 17 23:38:54.002092 kernel: pcpu-alloc: [0] 0 1
Apr 17 23:38:54.002104 kernel: kvm-guest: PV spinlocks enabled
Apr 17 23:38:54.002116 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:38:54.002130 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:38:54.002147 kernel: random: crng init done
Apr 17 23:38:54.002159 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 23:38:54.002231 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:38:54.002246 kernel: Fallback order for Node 0: 0
Apr 17 23:38:54.002259 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Apr 17 23:38:54.002271 kernel: Policy zone: Normal
Apr 17 23:38:54.002284 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:38:54.002296 kernel: software IO TLB: area num 2.
Apr 17 23:38:54.002314 kernel: Memory: 3966220K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 227292K reserved, 0K cma-reserved)
Apr 17 23:38:54.002326 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 17 23:38:54.002338 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:38:54.002349 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:38:54.002361 kernel: Dynamic Preempt: voluntary
Apr 17 23:38:54.002373 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:38:54.002386 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:38:54.002398 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 17 23:38:54.002410 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:38:54.002427 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:38:54.002439 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:38:54.002451 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:38:54.002463 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 17 23:38:54.002474 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 17 23:38:54.002486 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:38:54.002498 kernel: Console: colour VGA+ 80x25
Apr 17 23:38:54.002510 kernel: printk: console [tty0] enabled
Apr 17 23:38:54.002522 kernel: printk: console [ttyS0] enabled
Apr 17 23:38:54.002538 kernel: ACPI: Core revision 20230628
Apr 17 23:38:54.002551 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 17 23:38:54.002563 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:38:54.002575 kernel: x2apic enabled
Apr 17 23:38:54.002601 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 23:38:54.002616 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 17 23:38:54.002629 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 17 23:38:54.002641 kernel: kvm-guest: setup PV IPIs
Apr 17 23:38:54.002654 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 17 23:38:54.002666 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 17 23:38:54.002679 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Apr 17 23:38:54.002690 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 17 23:38:54.002707 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 17 23:38:54.002720 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 17 23:38:54.002732 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:38:54.002745 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 23:38:54.002757 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:38:54.002774 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 17 23:38:54.002787 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 17 23:38:54.002799 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 17 23:38:54.002812 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 17 23:38:54.002825 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 17 23:38:54.002838 kernel: active return thunk: srso_alias_return_thunk
Apr 17 23:38:54.002851 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 17 23:38:54.002863 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 17 23:38:54.002880 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:38:54.002893 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:38:54.002906 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:38:54.002919 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:38:54.002931 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 17 23:38:54.002945 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 23:38:54.002958 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Apr 17 23:38:54.002971 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Apr 17 23:38:54.002984 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:38:54.003002 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:38:54.003014 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:38:54.003026 kernel: landlock: Up and running.
Apr 17 23:38:54.003038 kernel: SELinux: Initializing.
Apr 17 23:38:54.003051 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:38:54.003062 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:38:54.003074 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Apr 17 23:38:54.003087 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:38:54.003100 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:38:54.003118 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:38:54.003130 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 17 23:38:54.003142 kernel: ... version: 0
Apr 17 23:38:54.003154 kernel: ... bit width: 48
Apr 17 23:38:54.003188 kernel: ... generic registers: 6
Apr 17 23:38:54.003203 kernel: ... value mask: 0000ffffffffffff
Apr 17 23:38:54.003215 kernel: ... max period: 00007fffffffffff
Apr 17 23:38:54.003227 kernel: ... fixed-purpose events: 0
Apr 17 23:38:54.003239 kernel: ... event mask: 000000000000003f
Apr 17 23:38:54.003256 kernel: signal: max sigframe size: 3376
Apr 17 23:38:54.003269 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:38:54.003283 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:38:54.003295 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:38:54.003308 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 23:38:54.003321 kernel: .... node #0, CPUs: #1
Apr 17 23:38:54.003333 kernel: smp: Brought up 1 node, 2 CPUs
Apr 17 23:38:54.003346 kernel: smpboot: Max logical packages: 1
Apr 17 23:38:54.003358 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Apr 17 23:38:54.003376 kernel: devtmpfs: initialized
Apr 17 23:38:54.003389 kernel: x86/mm: Memory block size: 128MB
Apr 17 23:38:54.003402 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:38:54.003415 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 17 23:38:54.003427 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:38:54.003440 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:38:54.003452 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:38:54.003464 kernel: audit: type=2000 audit(1776469133.258:1): state=initialized audit_enabled=0 res=1
Apr 17 23:38:54.003475 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:38:54.003493 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 23:38:54.003506 kernel: cpuidle: using governor menu
Apr 17 23:38:54.003518 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:38:54.003531 kernel: dca service started, version 1.12.1
Apr 17 23:38:54.003543 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 17 23:38:54.003555 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 17 23:38:54.003568 kernel: PCI: Using configuration type 1 for base access
Apr 17 23:38:54.003581 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 23:38:54.003593 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:38:54.003610 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:38:54.003623 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:38:54.003636 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:38:54.003648 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:38:54.003661 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:38:54.003674 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:38:54.003686 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 23:38:54.003699 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:38:54.003711 kernel: ACPI: Interpreter enabled
Apr 17 23:38:54.003727 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 17 23:38:54.003740 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:38:54.003753 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:38:54.003765 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 23:38:54.003778 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 17 23:38:54.003791 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:38:54.004081 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:38:54.006372 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 17 23:38:54.006595 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 17 23:38:54.006615 kernel: PCI host bridge to bus 0000:00
Apr 17 23:38:54.006824 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 23:38:54.007015 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 23:38:54.007232 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 23:38:54.007424 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 17 23:38:54.007616 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 17 23:38:54.007815 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Apr 17 23:38:54.008003 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:38:54.010279 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 17 23:38:54.010519 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 17 23:38:54.010735 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 17 23:38:54.010945 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 17 23:38:54.011159 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 17 23:38:54.011403 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 23:38:54.011630 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Apr 17 23:38:54.011846 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Apr 17 23:38:54.012056 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 17 23:38:54.014317 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 17 23:38:54.014543 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 17 23:38:54.014758 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Apr 17 23:38:54.014968 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 17 23:38:54.015207 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 17 23:38:54.015421 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 17 23:38:54.015646 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 17 23:38:54.015867 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 17 23:38:54.016090 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 17 23:38:54.020030 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Apr 17 23:38:54.020295 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Apr 17 23:38:54.020522 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 17 23:38:54.020730 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 17 23:38:54.020751 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 23:38:54.020765 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 23:38:54.020779 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 23:38:54.020800 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 23:38:54.020813 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 17 23:38:54.020825 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 17 23:38:54.020838 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 17 23:38:54.020850 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 17 23:38:54.020862 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 17 23:38:54.020876 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 17 23:38:54.020888 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 17 23:38:54.020901 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 17 23:38:54.020919 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 17 23:38:54.020932 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 17 23:38:54.020945 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 17 23:38:54.020957 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 17 23:38:54.020970 kernel: iommu: Default domain type: Translated
Apr 17 23:38:54.020982 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:38:54.020994 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:38:54.021007 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 23:38:54.021020 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Apr 17 23:38:54.021038 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Apr 17 23:38:54.022924 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 17 23:38:54.023139 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 17 23:38:54.023377 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 23:38:54.023398 kernel: vgaarb: loaded
Apr 17 23:38:54.023412 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 17 23:38:54.023426 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 17 23:38:54.023438 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 23:38:54.023458 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:38:54.023471 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:38:54.023484 kernel: pnp: PnP ACPI init
Apr 17 23:38:54.023706 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 17 23:38:54.023728 kernel: pnp: PnP ACPI: found 5 devices
Apr 17 23:38:54.023742 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:38:54.023754 kernel: NET: Registered PF_INET protocol family
Apr 17 23:38:54.023767 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:38:54.023785 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 23:38:54.023798 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:38:54.023811 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:38:54.023824 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 17 23:38:54.023837 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 17 23:38:54.023849 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:38:54.023862 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:38:54.023875 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:38:54.023888 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:38:54.024087 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 23:38:54.024331 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 23:38:54.024518 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 23:38:54.024708 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Apr 17 23:38:54.024900 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 17 23:38:54.025090 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Apr 17 23:38:54.025112 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:38:54.025126 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 17 23:38:54.025145 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Apr 17 23:38:54.025157 kernel: Initialise system trusted keyrings
Apr 17 23:38:54.027209 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 17 23:38:54.027228 kernel: Key type asymmetric registered
Apr 17 23:38:54.027241 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:38:54.027253 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:38:54.027266 kernel: io scheduler mq-deadline registered
Apr 17 23:38:54.027278 kernel: io scheduler kyber registered
Apr 17 23:38:54.027290 kernel: io scheduler bfq registered
Apr 17 23:38:54.027302 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:38:54.027323 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 17 23:38:54.027336 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 17 23:38:54.027349 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:38:54.027362 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:38:54.027375 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 23:38:54.027388 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 23:38:54.027400 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 23:38:54.027413 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 17 23:38:54.027649 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 17 23:38:54.027863 kernel: rtc_cmos 00:03: registered as rtc0
Apr 17 23:38:54.028067 kernel: rtc_cmos 00:03: setting system clock to 2026-04-17T23:38:53 UTC (1776469133)
Apr 17 23:38:54.029345 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 17 23:38:54.029368 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 17 23:38:54.029383 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:38:54.029395 kernel: Segment Routing with IPv6
Apr 17 23:38:54.029408 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:38:54.029427 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:38:54.029440 kernel: Key type dns_resolver registered
Apr 17 23:38:54.029452 kernel: IPI shorthand broadcast: enabled
Apr 17 23:38:54.029465 kernel: sched_clock: Marking stable (903002740, 330166200)->(1365113170, -131944230)
Apr 17 23:38:54.029477 kernel: registered taskstats version 1
Apr 17 23:38:54.029490 kernel: Loading compiled-in X.509 certificates
Apr 17 23:38:54.029502 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:38:54.029515 kernel: Key type .fscrypt registered
Apr 17 23:38:54.029527 kernel: Key type fscrypt-provisioning registered
Apr 17 23:38:54.029546 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:38:54.029559 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:38:54.029572 kernel: ima: No architecture policies found
Apr 17 23:38:54.029585 kernel: clk: Disabling unused clocks
Apr 17 23:38:54.029597 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 17 23:38:54.029610 kernel: Write protecting the kernel read-only data: 36864k
Apr 17 23:38:54.029622 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 17 23:38:54.029635 kernel: Run /init as init process
Apr 17 23:38:54.029648 kernel: with arguments:
Apr 17 23:38:54.029665 kernel: /init
Apr 17 23:38:54.029678 kernel: with environment:
Apr 17 23:38:54.029690 kernel: HOME=/
Apr 17 23:38:54.029702 kernel: TERM=linux
Apr 17 23:38:54.029717 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:38:54.029734 systemd[1]: Detected virtualization kvm.
Apr 17 23:38:54.029747 systemd[1]: Detected architecture x86-64.
Apr 17 23:38:54.029760 systemd[1]: Running in initrd.
Apr 17 23:38:54.029778 systemd[1]: No hostname configured, using default hostname.
Apr 17 23:38:54.029791 systemd[1]: Hostname set to .
Apr 17 23:38:54.029805 systemd[1]: Initializing machine ID from random generator.
Apr 17 23:38:54.029818 systemd[1]: Queued start job for default target initrd.target.
Apr 17 23:38:54.029833 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:38:54.029873 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:38:54.029895 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 23:38:54.029909 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:38:54.029923 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 23:38:54.029937 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 23:38:54.029952 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 23:38:54.029965 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 23:38:54.029984 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:38:54.029998 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:38:54.030012 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:38:54.030026 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:38:54.030039 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:38:54.030053 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:38:54.030067 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:38:54.030081 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:38:54.030095 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:38:54.030116 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 17 23:38:54.030130 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:38:54.030145 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:38:54.030159 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:38:54.031222 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:38:54.031240 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 23:38:54.031254 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:38:54.031268 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 17 23:38:54.031282 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 23:38:54.031304 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:38:54.031317 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:38:54.031367 systemd-journald[178]: Collecting audit messages is disabled. Apr 17 23:38:54.031402 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:38:54.031423 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 23:38:54.031441 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:38:54.031455 systemd[1]: Finished systemd-fsck-usr.service. Apr 17 23:38:54.031475 systemd-journald[178]: Journal started Apr 17 23:38:54.031502 systemd-journald[178]: Runtime Journal (/run/log/journal/09dbb46811ae4c66b9cccab86f751be9) is 8.0M, max 78.3M, 70.3M free. Apr 17 23:38:54.032308 systemd-modules-load[179]: Inserted module 'overlay' Apr 17 23:38:54.121610 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Apr 17 23:38:54.121650 kernel: Bridge firewalling registered Apr 17 23:38:54.121663 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:38:54.067371 systemd-modules-load[179]: Inserted module 'br_netfilter' Apr 17 23:38:54.122763 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:38:54.124193 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:38:54.131328 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:38:54.134475 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:38:54.138503 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 23:38:54.142320 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:38:54.156999 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:38:54.181377 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:38:54.182400 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:38:54.191716 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 17 23:38:54.196308 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:38:54.198680 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 17 23:38:54.202308 dracut-cmdline[206]: dracut-dracut-053 Apr 17 23:38:54.207472 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:38:54.212998 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:38:54.224219 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:38:54.252926 systemd-resolved[219]: Positive Trust Anchors: Apr 17 23:38:54.252948 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:38:54.252979 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:38:54.256373 systemd-resolved[219]: Defaulting to hostname 'linux'. Apr 17 23:38:54.257520 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:38:54.260705 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:38:54.323233 kernel: SCSI subsystem initialized Apr 17 23:38:54.336207 kernel: Loading iSCSI transport class v2.0-870. 
Apr 17 23:38:54.350201 kernel: iscsi: registered transport (tcp) Apr 17 23:38:54.371798 kernel: iscsi: registered transport (qla4xxx) Apr 17 23:38:54.371879 kernel: QLogic iSCSI HBA Driver Apr 17 23:38:54.425464 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 23:38:54.430320 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 23:38:54.476324 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 17 23:38:54.476410 kernel: device-mapper: uevent: version 1.0.3 Apr 17 23:38:54.478483 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 17 23:38:54.525211 kernel: raid6: avx2x4 gen() 21550 MB/s Apr 17 23:38:54.544274 kernel: raid6: avx2x2 gen() 19669 MB/s Apr 17 23:38:54.564718 kernel: raid6: avx2x1 gen() 7773 MB/s Apr 17 23:38:54.564780 kernel: raid6: using algorithm avx2x4 gen() 21550 MB/s Apr 17 23:38:54.585791 kernel: raid6: .... xor() 5242 MB/s, rmw enabled Apr 17 23:38:54.585871 kernel: raid6: using avx2x2 recovery algorithm Apr 17 23:38:54.613213 kernel: xor: automatically using best checksumming function avx Apr 17 23:38:54.779217 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 23:38:54.790212 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:38:54.799338 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:38:54.810984 systemd-udevd[395]: Using default interface naming scheme 'v255'. Apr 17 23:38:54.815724 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:38:54.823379 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 17 23:38:54.836751 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Apr 17 23:38:54.866982 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 17 23:38:54.873278 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:38:54.944832 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:38:54.955378 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 17 23:38:54.983453 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:38:54.987719 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:38:54.990475 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:38:54.992223 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:38:55.000319 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:38:55.012745 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:38:55.248219 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 23:38:55.254684 kernel: scsi host0: Virtio SCSI HBA Apr 17 23:38:55.258206 kernel: AVX2 version of gcm_enc/dec engaged. Apr 17 23:38:55.265520 kernel: AES CTR mode by8 optimization enabled Apr 17 23:38:55.271284 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 17 23:38:55.274463 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:38:55.275945 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:38:55.282278 kernel: libata version 3.00 loaded. Apr 17 23:38:55.279438 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:38:55.283231 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:38:55.283700 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:38:55.285520 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 17 23:38:55.299935 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:38:55.381271 kernel: ahci 0000:00:1f.2: version 3.0 Apr 17 23:38:55.381494 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 17 23:38:55.381508 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 17 23:38:55.381658 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 17 23:38:55.384188 kernel: scsi host1: ahci Apr 17 23:38:55.385191 kernel: scsi host2: ahci Apr 17 23:38:55.386681 kernel: scsi host3: ahci Apr 17 23:38:55.386865 kernel: scsi host4: ahci Apr 17 23:38:55.387031 kernel: scsi host5: ahci Apr 17 23:38:55.388228 kernel: scsi host6: ahci Apr 17 23:38:55.388397 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 Apr 17 23:38:55.388408 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 Apr 17 23:38:55.388418 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 Apr 17 23:38:55.388428 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 Apr 17 23:38:55.388587 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 Apr 17 23:38:55.388596 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 Apr 17 23:38:55.490534 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:38:55.502353 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:38:55.518695 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 17 23:38:55.699184 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 17 23:38:55.699249 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 17 23:38:55.700192 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 17 23:38:55.703187 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 17 23:38:55.707949 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 17 23:38:55.708185 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 17 23:38:55.720065 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 17 23:38:55.746373 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Apr 17 23:38:55.746726 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 17 23:38:55.746887 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 17 23:38:55.747042 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 17 23:38:55.756973 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 23:38:55.756994 kernel: GPT:9289727 != 167739391 Apr 17 23:38:55.757005 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 23:38:55.760645 kernel: GPT:9289727 != 167739391 Apr 17 23:38:55.760660 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 23:38:55.764468 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:38:55.765871 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 17 23:38:55.804295 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 17 23:38:55.812566 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (461) Apr 17 23:38:55.812586 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (439) Apr 17 23:38:55.820743 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Apr 17 23:38:55.825915 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 17 23:38:55.828050 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 17 23:38:55.832857 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 17 23:38:55.839273 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:38:55.845998 disk-uuid[566]: Primary Header is updated. Apr 17 23:38:55.845998 disk-uuid[566]: Secondary Entries is updated. Apr 17 23:38:55.845998 disk-uuid[566]: Secondary Header is updated. Apr 17 23:38:55.852192 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:38:55.858191 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:38:56.862216 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:38:56.863334 disk-uuid[567]: The operation has completed successfully. Apr 17 23:38:56.914501 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:38:56.914624 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 23:38:56.924294 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 17 23:38:56.927520 sh[581]: Success Apr 17 23:38:56.942272 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 17 23:38:56.984470 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:38:56.990946 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 23:38:56.993614 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 17 23:38:57.021791 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 Apr 17 23:38:57.021817 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:38:57.028078 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:38:57.028096 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:38:57.032751 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:38:57.040181 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 17 23:38:57.041785 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:38:57.042950 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:38:57.050279 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:38:57.054288 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 23:38:57.070114 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:38:57.070146 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:38:57.073719 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:38:57.082508 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 23:38:57.082708 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:38:57.094464 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 17 23:38:57.098646 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:38:57.105436 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:38:57.116267 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 17 23:38:57.159882 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:38:57.167324 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:38:57.192463 systemd-networkd[763]: lo: Link UP Apr 17 23:38:57.192474 systemd-networkd[763]: lo: Gained carrier Apr 17 23:38:57.194116 systemd-networkd[763]: Enumeration completed Apr 17 23:38:57.194289 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:38:57.195157 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:38:57.195161 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:38:57.197697 systemd[1]: Reached target network.target - Network. Apr 17 23:38:57.198219 systemd-networkd[763]: eth0: Link UP Apr 17 23:38:57.198223 systemd-networkd[763]: eth0: Gained carrier Apr 17 23:38:57.198231 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:38:57.209141 ignition[696]: Ignition 2.19.0 Apr 17 23:38:57.209181 ignition[696]: Stage: fetch-offline Apr 17 23:38:57.209225 ignition[696]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:38:57.209236 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:38:57.211396 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 17 23:38:57.209337 ignition[696]: parsed url from cmdline: "" Apr 17 23:38:57.209342 ignition[696]: no config URL provided Apr 17 23:38:57.209348 ignition[696]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:38:57.209357 ignition[696]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:38:57.209364 ignition[696]: failed to fetch config: resource requires networking Apr 17 23:38:57.210205 ignition[696]: Ignition finished successfully Apr 17 23:38:57.218311 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 17 23:38:57.231815 ignition[770]: Ignition 2.19.0 Apr 17 23:38:57.231830 ignition[770]: Stage: fetch Apr 17 23:38:57.232016 ignition[770]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:38:57.232032 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:38:57.232116 ignition[770]: parsed url from cmdline: "" Apr 17 23:38:57.232120 ignition[770]: no config URL provided Apr 17 23:38:57.232126 ignition[770]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:38:57.232135 ignition[770]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:38:57.232155 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #1 Apr 17 23:38:57.232317 ignition[770]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 23:38:57.432477 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #2 Apr 17 23:38:57.432678 ignition[770]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 23:38:57.833355 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #3 Apr 17 23:38:57.833509 ignition[770]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 23:38:57.936236 systemd-networkd[763]: eth0: DHCPv4 address 172.238.189.76/24, gateway 172.238.189.1 acquired from 23.205.167.174 Apr 17 23:38:58.633688 
ignition[770]: PUT http://169.254.169.254/v1/token: attempt #4 Apr 17 23:38:58.734104 ignition[770]: PUT result: OK Apr 17 23:38:58.734237 ignition[770]: GET http://169.254.169.254/v1/user-data: attempt #1 Apr 17 23:38:58.844701 ignition[770]: GET result: OK Apr 17 23:38:58.844788 ignition[770]: parsing config with SHA512: cbc603e2a4a65d9dca998e1c79ea7285016c78f29b60c0631f1836d6ffa9bc14c04a59d71fb3b6527d3add2984a15d10f4ad6d8c429ffcf7a44a94079b12695e Apr 17 23:38:58.848297 unknown[770]: fetched base config from "system" Apr 17 23:38:58.848566 ignition[770]: fetch: fetch complete Apr 17 23:38:58.848306 unknown[770]: fetched base config from "system" Apr 17 23:38:58.848572 ignition[770]: fetch: fetch passed Apr 17 23:38:58.848313 unknown[770]: fetched user config from "akamai" Apr 17 23:38:58.848802 ignition[770]: Ignition finished successfully Apr 17 23:38:58.852754 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 17 23:38:58.859303 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 17 23:38:58.872937 ignition[778]: Ignition 2.19.0 Apr 17 23:38:58.872948 ignition[778]: Stage: kargs Apr 17 23:38:58.873090 ignition[778]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:38:58.876463 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 17 23:38:58.873101 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:38:58.873937 ignition[778]: kargs: kargs passed Apr 17 23:38:58.873974 ignition[778]: Ignition finished successfully Apr 17 23:38:58.882331 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 17 23:38:58.894944 ignition[784]: Ignition 2.19.0 Apr 17 23:38:58.894954 ignition[784]: Stage: disks Apr 17 23:38:58.895084 ignition[784]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:38:58.895096 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:38:58.897283 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Apr 17 23:38:58.895751 ignition[784]: disks: disks passed Apr 17 23:38:58.920402 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 17 23:38:58.895791 ignition[784]: Ignition finished successfully Apr 17 23:38:58.921749 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 23:38:58.923203 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:38:58.924649 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:38:58.926274 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:38:58.939328 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 17 23:38:58.954512 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 17 23:38:58.956929 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 17 23:38:58.963246 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 17 23:38:59.049204 kernel: EXT4-fs (sda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none. Apr 17 23:38:59.050276 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 17 23:38:59.051504 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 17 23:38:59.061271 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:38:59.064284 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 17 23:38:59.066277 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 17 23:38:59.066323 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 17 23:38:59.066382 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Apr 17 23:38:59.073088 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 17 23:38:59.075789 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (800) Apr 17 23:38:59.082204 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:38:59.082243 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:38:59.082255 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:38:59.093543 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 23:38:59.093573 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:38:59.101344 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 17 23:38:59.103504 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 17 23:38:59.150718 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory Apr 17 23:38:59.156435 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory Apr 17 23:38:59.161764 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory Apr 17 23:38:59.168010 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory Apr 17 23:38:59.169448 systemd-networkd[763]: eth0: Gained IPv6LL Apr 17 23:38:59.263400 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 17 23:38:59.269265 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 17 23:38:59.274269 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 17 23:38:59.278564 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Apr 17 23:38:59.281981 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:38:59.303519 ignition[914]: INFO : Ignition 2.19.0 Apr 17 23:38:59.305190 ignition[914]: INFO : Stage: mount Apr 17 23:38:59.305190 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:38:59.305190 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:38:59.307270 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 17 23:38:59.309651 ignition[914]: INFO : mount: mount passed Apr 17 23:38:59.309651 ignition[914]: INFO : Ignition finished successfully Apr 17 23:38:59.311056 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 17 23:38:59.317245 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 17 23:39:00.057284 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:39:00.072197 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (926) Apr 17 23:39:00.072256 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:39:00.077796 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:39:00.077817 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:39:00.087501 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 23:39:00.087699 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:39:00.090341 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 17 23:39:00.117125 ignition[942]: INFO : Ignition 2.19.0 Apr 17 23:39:00.118226 ignition[942]: INFO : Stage: files Apr 17 23:39:00.118917 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:39:00.118917 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:39:00.121059 ignition[942]: DEBUG : files: compiled without relabeling support, skipping Apr 17 23:39:00.121059 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 17 23:39:00.121059 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 17 23:39:00.124495 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 17 23:39:00.125520 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 17 23:39:00.126955 unknown[942]: wrote ssh authorized keys file for user: core Apr 17 23:39:00.128097 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 17 23:39:00.129115 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:39:00.129115 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 17 23:39:00.448106 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 17 23:39:00.555899 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 17 
23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Apr 17 23:39:01.119096 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 17 23:39:01.470017 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 17 23:39:01.470017 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 17 23:39:01.474263 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:39:01.474263 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:39:01.474263 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 17 23:39:01.474263 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 17 23:39:01.474263 ignition[942]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 17 23:39:01.474263 ignition[942]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 17 23:39:01.474263 ignition[942]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 17 23:39:01.474263 ignition[942]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Apr 17 23:39:01.474263 ignition[942]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Apr 17 23:39:01.474263 ignition[942]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:39:01.474263 
ignition[942]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:39:01.474263 ignition[942]: INFO : files: files passed Apr 17 23:39:01.474263 ignition[942]: INFO : Ignition finished successfully Apr 17 23:39:01.475017 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 17 23:39:01.505401 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 17 23:39:01.510302 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 17 23:39:01.513814 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 17 23:39:01.513938 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 17 23:39:01.528056 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:39:01.528056 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:39:01.530703 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:39:01.534316 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:39:01.536890 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 17 23:39:01.543359 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 17 23:39:01.567370 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 17 23:39:01.567488 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 17 23:39:01.569823 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 17 23:39:01.570928 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Apr 17 23:39:01.572626 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 17 23:39:01.584321 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 17 23:39:01.597009 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:39:01.602318 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 17 23:39:01.611464 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:39:01.612310 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:39:01.613181 systemd[1]: Stopped target timers.target - Timer Units. Apr 17 23:39:01.614761 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 17 23:39:01.614860 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:39:01.616816 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 17 23:39:01.617861 systemd[1]: Stopped target basic.target - Basic System. Apr 17 23:39:01.619264 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 17 23:39:01.620860 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:39:01.622346 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 17 23:39:01.623796 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 17 23:39:01.625563 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:39:01.627262 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 17 23:39:01.629084 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 17 23:39:01.630646 systemd[1]: Stopped target swap.target - Swaps. Apr 17 23:39:01.632224 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Apr 17 23:39:01.632321 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:39:01.634285 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:39:01.635352 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:39:01.636730 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 17 23:39:01.636828 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:39:01.638221 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 17 23:39:01.638315 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 17 23:39:01.640573 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 17 23:39:01.640679 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:39:01.641783 systemd[1]: ignition-files.service: Deactivated successfully. Apr 17 23:39:01.641882 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 17 23:39:01.654307 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 17 23:39:01.658349 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 17 23:39:01.659085 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 17 23:39:01.659265 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:39:01.661449 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 17 23:39:01.661591 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:39:01.673590 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 17 23:39:01.673890 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Apr 17 23:39:01.679012 ignition[995]: INFO : Ignition 2.19.0 Apr 17 23:39:01.679012 ignition[995]: INFO : Stage: umount Apr 17 23:39:01.679012 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:39:01.679012 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:39:01.679012 ignition[995]: INFO : umount: umount passed Apr 17 23:39:01.679012 ignition[995]: INFO : Ignition finished successfully Apr 17 23:39:01.679539 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 17 23:39:01.679661 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 17 23:39:01.683567 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 17 23:39:01.683623 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 17 23:39:01.687490 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 17 23:39:01.687543 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 17 23:39:01.688314 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 17 23:39:01.688367 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 17 23:39:01.689121 systemd[1]: Stopped target network.target - Network. Apr 17 23:39:01.689831 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 17 23:39:01.689887 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:39:01.692214 systemd[1]: Stopped target paths.target - Path Units. Apr 17 23:39:01.693245 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 17 23:39:01.697489 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:39:01.698706 systemd[1]: Stopped target slices.target - Slice Units. Apr 17 23:39:01.722271 systemd[1]: Stopped target sockets.target - Socket Units. Apr 17 23:39:01.723844 systemd[1]: iscsid.socket: Deactivated successfully. 
Apr 17 23:39:01.723903 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:39:01.725512 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 17 23:39:01.725559 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:39:01.726944 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 17 23:39:01.727004 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 17 23:39:01.728593 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 17 23:39:01.728645 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 17 23:39:01.730484 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 17 23:39:01.732270 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 17 23:39:01.735067 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 17 23:39:01.735650 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 17 23:39:01.735766 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 17 23:39:01.736204 systemd-networkd[763]: eth0: DHCPv6 lease lost Apr 17 23:39:01.738746 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 17 23:39:01.738861 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 17 23:39:01.740372 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 17 23:39:01.740495 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 17 23:39:01.744639 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 17 23:39:01.744684 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:39:01.745811 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 17 23:39:01.745869 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 17 23:39:01.755052 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Apr 17 23:39:01.756002 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 17 23:39:01.756059 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:39:01.757825 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 17 23:39:01.757877 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:39:01.759478 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 17 23:39:01.759529 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 17 23:39:01.761098 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 17 23:39:01.761147 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:39:01.762708 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:39:01.781357 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 17 23:39:01.781554 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:39:01.784499 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 17 23:39:01.784609 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 17 23:39:01.786256 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 17 23:39:01.786327 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 17 23:39:01.787981 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 17 23:39:01.788027 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:39:01.789558 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 17 23:39:01.789611 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:39:01.791785 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 17 23:39:01.791835 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Apr 17 23:39:01.793386 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:39:01.793440 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:39:01.801636 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 17 23:39:01.802589 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 17 23:39:01.802647 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:39:01.803490 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 17 23:39:01.803543 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:39:01.809974 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 17 23:39:01.810028 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:39:01.810822 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:39:01.810872 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:39:01.812232 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 17 23:39:01.812336 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 17 23:39:01.813771 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 17 23:39:01.821370 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 17 23:39:01.829509 systemd[1]: Switching root. 
Apr 17 23:39:01.866082 systemd-journald[178]: Journal stopped Apr
17 23:38:54.001518 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 23:38:54.001537 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Apr 17 23:38:54.001549 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Apr 17 23:38:54.001561 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Apr 17 23:38:54.001578 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Apr 17 23:38:54.001590 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Apr 17 23:38:54.001602 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Apr 17 23:38:54.001614 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Apr 17 23:38:54.001626 kernel: No NUMA configuration found Apr 17 23:38:54.001638 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Apr 17 23:38:54.001650 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff] Apr 17 23:38:54.001662 kernel: Zone ranges: Apr 17 23:38:54.001679 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 17 23:38:54.001691 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 17 23:38:54.001703 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Apr 17 23:38:54.001715 kernel: Movable zone start for each node Apr 17 23:38:54.001727 kernel: Early memory node ranges Apr 17 23:38:54.001740 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 17 23:38:54.001752 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Apr 17 23:38:54.001764 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Apr 17 23:38:54.001777 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Apr 17 23:38:54.001789 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 17 23:38:54.001805 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 17 23:38:54.001817 kernel: On node 0, zone Normal: 35 pages in 
unavailable ranges Apr 17 23:38:54.001829 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 17 23:38:54.001841 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 17 23:38:54.001854 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 17 23:38:54.001866 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 17 23:38:54.001878 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 17 23:38:54.001889 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 17 23:38:54.001901 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 17 23:38:54.001918 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 17 23:38:54.001930 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 17 23:38:54.001942 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 17 23:38:54.001955 kernel: TSC deadline timer available Apr 17 23:38:54.001967 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 17 23:38:54.001979 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 17 23:38:54.001991 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 17 23:38:54.002002 kernel: kvm-guest: setup PV sched yield Apr 17 23:38:54.002014 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 17 23:38:54.002031 kernel: Booting paravirtualized kernel on KVM Apr 17 23:38:54.002043 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 17 23:38:54.002056 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 17 23:38:54.002068 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Apr 17 23:38:54.002080 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Apr 17 23:38:54.002092 kernel: pcpu-alloc: [0] 0 1 Apr 17 23:38:54.002104 kernel: kvm-guest: PV spinlocks enabled Apr 17 23:38:54.002116 kernel: PV qspinlock hash table entries: 256 (order: 0, 
4096 bytes, linear) Apr 17 23:38:54.002130 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:38:54.002147 kernel: random: crng init done Apr 17 23:38:54.002159 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 17 23:38:54.002231 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 17 23:38:54.002246 kernel: Fallback order for Node 0: 0 Apr 17 23:38:54.002259 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Apr 17 23:38:54.002271 kernel: Policy zone: Normal Apr 17 23:38:54.002284 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 17 23:38:54.002296 kernel: software IO TLB: area num 2. Apr 17 23:38:54.002314 kernel: Memory: 3966220K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 227292K reserved, 0K cma-reserved) Apr 17 23:38:54.002326 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 17 23:38:54.002338 kernel: ftrace: allocating 37996 entries in 149 pages Apr 17 23:38:54.002349 kernel: ftrace: allocated 149 pages with 4 groups Apr 17 23:38:54.002361 kernel: Dynamic Preempt: voluntary Apr 17 23:38:54.002373 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 17 23:38:54.002386 kernel: rcu: RCU event tracing is enabled. Apr 17 23:38:54.002398 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 17 23:38:54.002410 kernel: Trampoline variant of Tasks RCU enabled. Apr 17 23:38:54.002427 kernel: Rude variant of Tasks RCU enabled. Apr 17 23:38:54.002439 kernel: Tracing variant of Tasks RCU enabled. 
Apr 17 23:38:54.002451 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 17 23:38:54.002463 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 17 23:38:54.002474 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 17 23:38:54.002486 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 17 23:38:54.002498 kernel: Console: colour VGA+ 80x25 Apr 17 23:38:54.002510 kernel: printk: console [tty0] enabled Apr 17 23:38:54.002522 kernel: printk: console [ttyS0] enabled Apr 17 23:38:54.002538 kernel: ACPI: Core revision 20230628 Apr 17 23:38:54.002551 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 17 23:38:54.002563 kernel: APIC: Switch to symmetric I/O mode setup Apr 17 23:38:54.002575 kernel: x2apic enabled Apr 17 23:38:54.002601 kernel: APIC: Switched APIC routing to: physical x2apic Apr 17 23:38:54.002616 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 17 23:38:54.002629 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 17 23:38:54.002641 kernel: kvm-guest: setup PV IPIs Apr 17 23:38:54.002654 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 17 23:38:54.002666 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Apr 17 23:38:54.002679 kernel: Calibrating delay loop (skipped) preset value.. 
4000.00 BogoMIPS (lpj=2000000) Apr 17 23:38:54.002690 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 17 23:38:54.002707 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Apr 17 23:38:54.002720 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Apr 17 23:38:54.002732 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 17 23:38:54.002745 kernel: Spectre V2 : Mitigation: Retpolines Apr 17 23:38:54.002757 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 17 23:38:54.002774 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Apr 17 23:38:54.002787 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 17 23:38:54.002799 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 17 23:38:54.002812 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Apr 17 23:38:54.002825 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Apr 17 23:38:54.002838 kernel: active return thunk: srso_alias_return_thunk Apr 17 23:38:54.002851 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Apr 17 23:38:54.002863 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Apr 17 23:38:54.002880 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Apr 17 23:38:54.002893 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 17 23:38:54.002906 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 17 23:38:54.002919 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 17 23:38:54.002931 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Apr 17 23:38:54.002945 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 17 23:38:54.002958 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Apr 17 23:38:54.002971 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. Apr 17 23:38:54.002984 kernel: Freeing SMP alternatives memory: 32K Apr 17 23:38:54.003002 kernel: pid_max: default: 32768 minimum: 301 Apr 17 23:38:54.003014 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 17 23:38:54.003026 kernel: landlock: Up and running. Apr 17 23:38:54.003038 kernel: SELinux: Initializing. Apr 17 23:38:54.003051 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 17 23:38:54.003062 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 17 23:38:54.003074 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Apr 17 23:38:54.003087 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 23:38:54.003100 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Apr 17 23:38:54.003118 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 23:38:54.003130 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Apr 17 23:38:54.003142 kernel: ... version: 0 Apr 17 23:38:54.003154 kernel: ... bit width: 48 Apr 17 23:38:54.003188 kernel: ... generic registers: 6 Apr 17 23:38:54.003203 kernel: ... value mask: 0000ffffffffffff Apr 17 23:38:54.003215 kernel: ... max period: 00007fffffffffff Apr 17 23:38:54.003227 kernel: ... fixed-purpose events: 0 Apr 17 23:38:54.003239 kernel: ... event mask: 000000000000003f Apr 17 23:38:54.003256 kernel: signal: max sigframe size: 3376 Apr 17 23:38:54.003269 kernel: rcu: Hierarchical SRCU implementation. Apr 17 23:38:54.003283 kernel: rcu: Max phase no-delay instances is 400. Apr 17 23:38:54.003295 kernel: smp: Bringing up secondary CPUs ... Apr 17 23:38:54.003308 kernel: smpboot: x86: Booting SMP configuration: Apr 17 23:38:54.003321 kernel: .... node #0, CPUs: #1 Apr 17 23:38:54.003333 kernel: smp: Brought up 1 node, 2 CPUs Apr 17 23:38:54.003346 kernel: smpboot: Max logical packages: 1 Apr 17 23:38:54.003358 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Apr 17 23:38:54.003376 kernel: devtmpfs: initialized Apr 17 23:38:54.003389 kernel: x86/mm: Memory block size: 128MB Apr 17 23:38:54.003402 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 17 23:38:54.003415 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 17 23:38:54.003427 kernel: pinctrl core: initialized pinctrl subsystem Apr 17 23:38:54.003440 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 17 23:38:54.003452 kernel: audit: initializing netlink subsys (disabled) Apr 17 23:38:54.003464 kernel: audit: type=2000 audit(1776469133.258:1): state=initialized audit_enabled=0 res=1 Apr 17 23:38:54.003475 kernel: thermal_sys: Registered thermal governor 'step_wise' 
Apr 17 23:38:54.003493 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 17 23:38:54.003506 kernel: cpuidle: using governor menu Apr 17 23:38:54.003518 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 17 23:38:54.003531 kernel: dca service started, version 1.12.1 Apr 17 23:38:54.003543 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 17 23:38:54.003555 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 17 23:38:54.003568 kernel: PCI: Using configuration type 1 for base access Apr 17 23:38:54.003581 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 17 23:38:54.003593 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 17 23:38:54.003610 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 17 23:38:54.003623 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 17 23:38:54.003636 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 17 23:38:54.003648 kernel: ACPI: Added _OSI(Module Device) Apr 17 23:38:54.003661 kernel: ACPI: Added _OSI(Processor Device) Apr 17 23:38:54.003674 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 17 23:38:54.003686 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 17 23:38:54.003699 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 17 23:38:54.003711 kernel: ACPI: Interpreter enabled Apr 17 23:38:54.003727 kernel: ACPI: PM: (supports S0 S3 S5) Apr 17 23:38:54.003740 kernel: ACPI: Using IOAPIC for interrupt routing Apr 17 23:38:54.003753 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 17 23:38:54.003765 kernel: PCI: Using E820 reservations for host bridge windows Apr 17 23:38:54.003778 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 17 23:38:54.003791 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 
00-ff]) Apr 17 23:38:54.004081 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 17 23:38:54.006372 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 17 23:38:54.006595 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 17 23:38:54.006615 kernel: PCI host bridge to bus 0000:00 Apr 17 23:38:54.006824 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 17 23:38:54.007015 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 17 23:38:54.007232 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 17 23:38:54.007424 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Apr 17 23:38:54.007616 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 17 23:38:54.007815 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Apr 17 23:38:54.008003 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 17 23:38:54.010279 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 17 23:38:54.010519 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 17 23:38:54.010735 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Apr 17 23:38:54.010945 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Apr 17 23:38:54.011159 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Apr 17 23:38:54.011403 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 17 23:38:54.011630 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 Apr 17 23:38:54.011846 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f] Apr 17 23:38:54.012056 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Apr 17 23:38:54.014317 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Apr 17 23:38:54.014543 kernel: pci 0000:00:03.0: 
[1af4:1000] type 00 class 0x020000 Apr 17 23:38:54.014758 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Apr 17 23:38:54.014968 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Apr 17 23:38:54.015207 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Apr 17 23:38:54.015421 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Apr 17 23:38:54.015646 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 17 23:38:54.015867 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 17 23:38:54.016090 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 17 23:38:54.020030 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df] Apr 17 23:38:54.020295 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff] Apr 17 23:38:54.020522 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 17 23:38:54.020730 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Apr 17 23:38:54.020751 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 17 23:38:54.020765 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 17 23:38:54.020779 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 17 23:38:54.020800 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 17 23:38:54.020813 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 17 23:38:54.020825 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 17 23:38:54.020838 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 17 23:38:54.020850 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 17 23:38:54.020862 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 17 23:38:54.020876 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 17 23:38:54.020888 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 17 23:38:54.020901 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 
Apr 17 23:38:54.020919 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 17 23:38:54.020932 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 17 23:38:54.020945 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 17 23:38:54.020957 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 17 23:38:54.020970 kernel: iommu: Default domain type: Translated Apr 17 23:38:54.020982 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 17 23:38:54.020994 kernel: PCI: Using ACPI for IRQ routing Apr 17 23:38:54.021007 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 17 23:38:54.021020 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Apr 17 23:38:54.021038 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Apr 17 23:38:54.022924 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 17 23:38:54.023139 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 17 23:38:54.023377 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 17 23:38:54.023398 kernel: vgaarb: loaded Apr 17 23:38:54.023412 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 17 23:38:54.023426 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 17 23:38:54.023438 kernel: clocksource: Switched to clocksource kvm-clock Apr 17 23:38:54.023458 kernel: VFS: Disk quotas dquot_6.6.0 Apr 17 23:38:54.023471 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 17 23:38:54.023484 kernel: pnp: PnP ACPI init Apr 17 23:38:54.023706 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 17 23:38:54.023728 kernel: pnp: PnP ACPI: found 5 devices Apr 17 23:38:54.023742 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 17 23:38:54.023754 kernel: NET: Registered PF_INET protocol family Apr 17 23:38:54.023767 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, 
linear) Apr 17 23:38:54.023785 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 17 23:38:54.023798 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 17 23:38:54.023811 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 17 23:38:54.023824 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 17 23:38:54.023837 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 17 23:38:54.023849 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 17 23:38:54.023862 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 17 23:38:54.023875 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 17 23:38:54.023888 kernel: NET: Registered PF_XDP protocol family Apr 17 23:38:54.024087 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 17 23:38:54.024331 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 17 23:38:54.024518 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 17 23:38:54.024708 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Apr 17 23:38:54.024900 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 17 23:38:54.025090 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Apr 17 23:38:54.025112 kernel: PCI: CLS 0 bytes, default 64 Apr 17 23:38:54.025126 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 17 23:38:54.025145 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Apr 17 23:38:54.025157 kernel: Initialise system trusted keyrings Apr 17 23:38:54.027209 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 17 23:38:54.027228 kernel: Key type asymmetric registered Apr 17 23:38:54.027241 kernel: Asymmetric key parser 'x509' registered Apr 17 23:38:54.027253 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 251) Apr 17 23:38:54.027266 kernel: io scheduler mq-deadline registered Apr 17 23:38:54.027278 kernel: io scheduler kyber registered Apr 17 23:38:54.027290 kernel: io scheduler bfq registered Apr 17 23:38:54.027302 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 17 23:38:54.027323 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 17 23:38:54.027336 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 17 23:38:54.027349 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 17 23:38:54.027362 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 17 23:38:54.027375 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 17 23:38:54.027388 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 17 23:38:54.027400 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 17 23:38:54.027413 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 17 23:38:54.027649 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 17 23:38:54.027863 kernel: rtc_cmos 00:03: registered as rtc0 Apr 17 23:38:54.028067 kernel: rtc_cmos 00:03: setting system clock to 2026-04-17T23:38:53 UTC (1776469133) Apr 17 23:38:54.029345 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 17 23:38:54.029368 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 17 23:38:54.029383 kernel: NET: Registered PF_INET6 protocol family Apr 17 23:38:54.029395 kernel: Segment Routing with IPv6 Apr 17 23:38:54.029408 kernel: In-situ OAM (IOAM) with IPv6 Apr 17 23:38:54.029427 kernel: NET: Registered PF_PACKET protocol family Apr 17 23:38:54.029440 kernel: Key type dns_resolver registered Apr 17 23:38:54.029452 kernel: IPI shorthand broadcast: enabled Apr 17 23:38:54.029465 kernel: sched_clock: Marking stable (903002740, 330166200)->(1365113170, -131944230) Apr 17 23:38:54.029477 kernel: registered taskstats 
version 1 Apr 17 23:38:54.029490 kernel: Loading compiled-in X.509 certificates Apr 17 23:38:54.029502 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f' Apr 17 23:38:54.029515 kernel: Key type .fscrypt registered Apr 17 23:38:54.029527 kernel: Key type fscrypt-provisioning registered Apr 17 23:38:54.029546 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 17 23:38:54.029559 kernel: ima: Allocated hash algorithm: sha1 Apr 17 23:38:54.029572 kernel: ima: No architecture policies found Apr 17 23:38:54.029585 kernel: clk: Disabling unused clocks Apr 17 23:38:54.029597 kernel: Freeing unused kernel image (initmem) memory: 42892K Apr 17 23:38:54.029610 kernel: Write protecting the kernel read-only data: 36864k Apr 17 23:38:54.029622 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 17 23:38:54.029635 kernel: Run /init as init process Apr 17 23:38:54.029648 kernel: with arguments: Apr 17 23:38:54.029665 kernel: /init Apr 17 23:38:54.029678 kernel: with environment: Apr 17 23:38:54.029690 kernel: HOME=/ Apr 17 23:38:54.029702 kernel: TERM=linux Apr 17 23:38:54.029717 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 17 23:38:54.029734 systemd[1]: Detected virtualization kvm. Apr 17 23:38:54.029747 systemd[1]: Detected architecture x86-64. Apr 17 23:38:54.029760 systemd[1]: Running in initrd. Apr 17 23:38:54.029778 systemd[1]: No hostname configured, using default hostname. Apr 17 23:38:54.029791 systemd[1]: Hostname set to . Apr 17 23:38:54.029805 systemd[1]: Initializing machine ID from random generator. 
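The log above shows the kernel handing off to `/init` with the command line echoed earlier (`root=LABEL=ROOT`, `flatcar.oem.id=akamai`, two `console=` entries, and so on). As a sketch of how such a parameter string decomposes, here is a minimal parser following the usual kernel convention (whitespace-separated tokens, optional `key=value`, later duplicates winning); the string below is copied from this boot's own command line:

```python
# Minimal sketch: split a kernel command line into a dict, kernel-style.
# Bare flags map to None; a repeated key (e.g. two console= entries) keeps
# the last occurrence, which is also the console the kernel prefers.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else None
    return params

# Parameters copied from this boot's dracut-cmdline output (truncated).
cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
           "console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected "
           "flatcar.oem.id=akamai")
params = parse_cmdline(cmdline)
```

Note how `root=LABEL=ROOT` splits only on the first `=`, leaving `LABEL=ROOT` as the value, which is why label-based root selection survives this style of parsing.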
Apr 17 23:38:54.029818 systemd[1]: Queued start job for default target initrd.target. Apr 17 23:38:54.029833 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:38:54.029873 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:38:54.029895 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 17 23:38:54.029909 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 23:38:54.029923 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 17 23:38:54.029937 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 17 23:38:54.029952 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 17 23:38:54.029965 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 17 23:38:54.029984 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:38:54.029998 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:38:54.030012 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:38:54.030026 systemd[1]: Reached target slices.target - Slice Units. Apr 17 23:38:54.030039 systemd[1]: Reached target swap.target - Swaps. Apr 17 23:38:54.030053 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:38:54.030067 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:38:54.030081 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:38:54.030095 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Apr 17 23:38:54.030116 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 17 23:38:54.030130 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:38:54.030145 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:38:54.030159 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:38:54.031222 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:38:54.031240 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 23:38:54.031254 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:38:54.031268 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 17 23:38:54.031282 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 23:38:54.031304 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:38:54.031317 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:38:54.031367 systemd-journald[178]: Collecting audit messages is disabled. Apr 17 23:38:54.031402 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:38:54.031423 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 23:38:54.031441 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:38:54.031455 systemd[1]: Finished systemd-fsck-usr.service. Apr 17 23:38:54.031475 systemd-journald[178]: Journal started Apr 17 23:38:54.031502 systemd-journald[178]: Runtime Journal (/run/log/journal/09dbb46811ae4c66b9cccab86f751be9) is 8.0M, max 78.3M, 70.3M free. Apr 17 23:38:54.032308 systemd-modules-load[179]: Inserted module 'overlay' Apr 17 23:38:54.121610 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Apr 17 23:38:54.121650 kernel: Bridge firewalling registered Apr 17 23:38:54.121663 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:38:54.067371 systemd-modules-load[179]: Inserted module 'br_netfilter' Apr 17 23:38:54.122763 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:38:54.124193 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:38:54.131328 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:38:54.134475 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:38:54.138503 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 23:38:54.142320 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:38:54.156999 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:38:54.181377 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:38:54.182400 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:38:54.191716 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 17 23:38:54.196308 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:38:54.198680 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
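The "bridge: filtering via arp/ip/ip6tables is no longer available by default" warning above means setups that expect iptables to see bridged traffic must now load `br_netfilter` explicitly (which `systemd-modules-load` does a moment later in this log). One conventional way to make that persistent on a systemd system is a modules-load.d fragment; the file path here is the standard systemd convention, not something this log shows:

```
# /etc/modules-load.d/br_netfilter.conf (hypothetical path, systemd convention)
br_netfilter
```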
Apr 17 23:38:54.202308 dracut-cmdline[206]: dracut-dracut-053 Apr 17 23:38:54.207472 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:38:54.212998 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:38:54.224219 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:38:54.252926 systemd-resolved[219]: Positive Trust Anchors: Apr 17 23:38:54.252948 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:38:54.252979 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:38:54.256373 systemd-resolved[219]: Defaulting to hostname 'linux'. Apr 17 23:38:54.257520 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:38:54.260705 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:38:54.323233 kernel: SCSI subsystem initialized Apr 17 23:38:54.336207 kernel: Loading iSCSI transport class v2.0-870. 
Apr 17 23:38:54.350201 kernel: iscsi: registered transport (tcp) Apr 17 23:38:54.371798 kernel: iscsi: registered transport (qla4xxx) Apr 17 23:38:54.371879 kernel: QLogic iSCSI HBA Driver Apr 17 23:38:54.425464 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 23:38:54.430320 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 23:38:54.476324 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 17 23:38:54.476410 kernel: device-mapper: uevent: version 1.0.3 Apr 17 23:38:54.478483 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 17 23:38:54.525211 kernel: raid6: avx2x4 gen() 21550 MB/s Apr 17 23:38:54.544274 kernel: raid6: avx2x2 gen() 19669 MB/s Apr 17 23:38:54.564718 kernel: raid6: avx2x1 gen() 7773 MB/s Apr 17 23:38:54.564780 kernel: raid6: using algorithm avx2x4 gen() 21550 MB/s Apr 17 23:38:54.585791 kernel: raid6: .... xor() 5242 MB/s, rmw enabled Apr 17 23:38:54.585871 kernel: raid6: using avx2x2 recovery algorithm Apr 17 23:38:54.613213 kernel: xor: automatically using best checksumming function avx Apr 17 23:38:54.779217 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 23:38:54.790212 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:38:54.799338 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:38:54.810984 systemd-udevd[395]: Using default interface naming scheme 'v255'. Apr 17 23:38:54.815724 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:38:54.823379 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 17 23:38:54.836751 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Apr 17 23:38:54.866982 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
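The raid6 lines above record the kernel benchmarking each available `gen()` implementation and keeping the fastest; a toy sketch of that selection, using the throughputs this boot measured:

```python
# Sketch of the raid6 gen() selection logged above: benchmark every candidate,
# keep the one with the highest throughput. Numbers (MB/s) are from this boot.
gen_results = {"avx2x4": 21550, "avx2x2": 19669, "avx2x1": 7773}
best = max(gen_results, key=gen_results.get)
```

The recovery algorithm is chosen by a separate benchmark, which is why the log can report `avx2x4` for generation but `avx2x2` for recovery.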
Apr 17 23:38:54.873278 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:38:54.944832 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:38:54.955378 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 17 23:38:54.983453 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:38:54.987719 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:38:54.990475 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:38:54.992223 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:38:55.000319 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:38:55.012745 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:38:55.248219 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 23:38:55.254684 kernel: scsi host0: Virtio SCSI HBA Apr 17 23:38:55.258206 kernel: AVX2 version of gcm_enc/dec engaged. Apr 17 23:38:55.265520 kernel: AES CTR mode by8 optimization enabled Apr 17 23:38:55.271284 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 17 23:38:55.274463 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:38:55.275945 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:38:55.282278 kernel: libata version 3.00 loaded. Apr 17 23:38:55.279438 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:38:55.283231 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:38:55.283700 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:38:55.285520 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 17 23:38:55.299935 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:38:55.381271 kernel: ahci 0000:00:1f.2: version 3.0 Apr 17 23:38:55.381494 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 17 23:38:55.381508 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 17 23:38:55.381658 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 17 23:38:55.384188 kernel: scsi host1: ahci Apr 17 23:38:55.385191 kernel: scsi host2: ahci Apr 17 23:38:55.386681 kernel: scsi host3: ahci Apr 17 23:38:55.386865 kernel: scsi host4: ahci Apr 17 23:38:55.387031 kernel: scsi host5: ahci Apr 17 23:38:55.388228 kernel: scsi host6: ahci Apr 17 23:38:55.388397 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 Apr 17 23:38:55.388408 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 Apr 17 23:38:55.388418 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 Apr 17 23:38:55.388428 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 Apr 17 23:38:55.388587 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 Apr 17 23:38:55.388596 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 Apr 17 23:38:55.490534 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:38:55.502353 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:38:55.518695 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 17 23:38:55.699184 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 17 23:38:55.699249 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 17 23:38:55.700192 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 17 23:38:55.703187 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 17 23:38:55.707949 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 17 23:38:55.708185 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 17 23:38:55.720065 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 17 23:38:55.746373 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Apr 17 23:38:55.746726 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 17 23:38:55.746887 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 17 23:38:55.747042 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 17 23:38:55.756973 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 23:38:55.756994 kernel: GPT:9289727 != 167739391 Apr 17 23:38:55.757005 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 23:38:55.760645 kernel: GPT:9289727 != 167739391 Apr 17 23:38:55.760660 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 23:38:55.764468 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:38:55.765871 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 17 23:38:55.804295 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 17 23:38:55.812566 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (461) Apr 17 23:38:55.812586 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (439) Apr 17 23:38:55.820743 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
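The `GPT:9289727 != 167739391` warning above is pure arithmetic: the backup GPT header must sit on the disk's last LBA, but this image's backup header is where a smaller original disk ended (disk-uuid.service rewrites it a moment later). The worked numbers, taken from the `[sda]` lines in this log:

```python
# Worked numbers behind "GPT:9289727 != 167739391": the backup GPT header
# belongs on the final sector, but the deployed image carries the header
# position of the smaller disk it was built for.
sector = 512
total_sectors = 167739392            # "[sda] 167739392 512-byte logical blocks"
expected_alt_lba = total_sectors - 1 # last LBA, where the backup header belongs
image_alt_lba = 9289727              # where the kernel actually found it
disk_gib = total_sectors * sector / 2**30        # the provisioned volume
image_gib = (image_alt_lba + 1) * sector / 2**30 # the original image's size
```

This matches the kernel's own `(85.9 GB/80.0 GiB)` report for the volume, and explains why the message suggests correcting the GPT rather than treating it as corruption.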
Apr 17 23:38:55.825915 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 17 23:38:55.828050 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 17 23:38:55.832857 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 17 23:38:55.839273 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:38:55.845998 disk-uuid[566]: Primary Header is updated. Apr 17 23:38:55.845998 disk-uuid[566]: Secondary Entries is updated. Apr 17 23:38:55.845998 disk-uuid[566]: Secondary Header is updated. Apr 17 23:38:55.852192 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:38:55.858191 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:38:56.862216 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:38:56.863334 disk-uuid[567]: The operation has completed successfully. Apr 17 23:38:56.914501 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:38:56.914624 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 23:38:56.924294 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 17 23:38:56.927520 sh[581]: Success Apr 17 23:38:56.942272 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 17 23:38:56.984470 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:38:56.990946 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 23:38:56.993614 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 17 23:38:57.021791 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 Apr 17 23:38:57.021817 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:38:57.028078 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:38:57.028096 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:38:57.032751 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:38:57.040181 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 17 23:38:57.041785 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:38:57.042950 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:38:57.050279 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:38:57.054288 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 23:38:57.070114 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:38:57.070146 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:38:57.073719 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:38:57.082508 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 23:38:57.082708 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:38:57.094464 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 17 23:38:57.098646 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:38:57.105436 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:38:57.116267 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 17 23:38:57.159882 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:38:57.167324 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:38:57.192463 systemd-networkd[763]: lo: Link UP Apr 17 23:38:57.192474 systemd-networkd[763]: lo: Gained carrier Apr 17 23:38:57.194116 systemd-networkd[763]: Enumeration completed Apr 17 23:38:57.194289 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:38:57.195157 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:38:57.195161 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:38:57.197697 systemd[1]: Reached target network.target - Network. Apr 17 23:38:57.198219 systemd-networkd[763]: eth0: Link UP Apr 17 23:38:57.198223 systemd-networkd[763]: eth0: Gained carrier Apr 17 23:38:57.198231 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:38:57.209141 ignition[696]: Ignition 2.19.0 Apr 17 23:38:57.209181 ignition[696]: Stage: fetch-offline Apr 17 23:38:57.209225 ignition[696]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:38:57.209236 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:38:57.211396 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 17 23:38:57.209337 ignition[696]: parsed url from cmdline: "" Apr 17 23:38:57.209342 ignition[696]: no config URL provided Apr 17 23:38:57.209348 ignition[696]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:38:57.209357 ignition[696]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:38:57.209364 ignition[696]: failed to fetch config: resource requires networking Apr 17 23:38:57.210205 ignition[696]: Ignition finished successfully Apr 17 23:38:57.218311 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 17 23:38:57.231815 ignition[770]: Ignition 2.19.0 Apr 17 23:38:57.231830 ignition[770]: Stage: fetch Apr 17 23:38:57.232016 ignition[770]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:38:57.232032 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:38:57.232116 ignition[770]: parsed url from cmdline: "" Apr 17 23:38:57.232120 ignition[770]: no config URL provided Apr 17 23:38:57.232126 ignition[770]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:38:57.232135 ignition[770]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:38:57.232155 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #1 Apr 17 23:38:57.232317 ignition[770]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 23:38:57.432477 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #2 Apr 17 23:38:57.432678 ignition[770]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 23:38:57.833355 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #3 Apr 17 23:38:57.833509 ignition[770]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 23:38:57.936236 systemd-networkd[763]: eth0: DHCPv4 address 172.238.189.76/24, gateway 172.238.189.1 acquired from 23.205.167.174 Apr 17 23:38:58.633688 
ignition[770]: PUT http://169.254.169.254/v1/token: attempt #4 Apr 17 23:38:58.734104 ignition[770]: PUT result: OK Apr 17 23:38:58.734237 ignition[770]: GET http://169.254.169.254/v1/user-data: attempt #1 Apr 17 23:38:58.844701 ignition[770]: GET result: OK Apr 17 23:38:58.844788 ignition[770]: parsing config with SHA512: cbc603e2a4a65d9dca998e1c79ea7285016c78f29b60c0631f1836d6ffa9bc14c04a59d71fb3b6527d3add2984a15d10f4ad6d8c429ffcf7a44a94079b12695e Apr 17 23:38:58.848297 unknown[770]: fetched base config from "system" Apr 17 23:38:58.848566 ignition[770]: fetch: fetch complete Apr 17 23:38:58.848306 unknown[770]: fetched base config from "system" Apr 17 23:38:58.848572 ignition[770]: fetch: fetch passed Apr 17 23:38:58.848313 unknown[770]: fetched user config from "akamai" Apr 17 23:38:58.848802 ignition[770]: Ignition finished successfully Apr 17 23:38:58.852754 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 17 23:38:58.859303 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 17 23:38:58.872937 ignition[778]: Ignition 2.19.0 Apr 17 23:38:58.872948 ignition[778]: Stage: kargs Apr 17 23:38:58.873090 ignition[778]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:38:58.876463 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 17 23:38:58.873101 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:38:58.873937 ignition[778]: kargs: kargs passed Apr 17 23:38:58.873974 ignition[778]: Ignition finished successfully Apr 17 23:38:58.882331 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 17 23:38:58.894944 ignition[784]: Ignition 2.19.0 Apr 17 23:38:58.894954 ignition[784]: Stage: disks Apr 17 23:38:58.895084 ignition[784]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:38:58.895096 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:38:58.897283 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
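The Ignition fetch stage above follows a token-then-fetch sequence against the metadata service: a `PUT` to `http://169.254.169.254/v1/token`, retried with roughly doubling backoff (attempts at +0.2 s, +0.4 s, +0.8 s) until DHCP brings the network up, then a `GET` of `/v1/user-data`. A minimal sketch of constructing that request pair; only the URLs and verbs come from the log, and no request is actually sent:

```python
# Sketch of the metadata request pair Ignition logged above. build_requests()
# only constructs the Request objects; sending them (and any auth headers the
# real service expects) is outside what this log records.
import urllib.request

METADATA = "http://169.254.169.254"

def build_requests():
    token_req = urllib.request.Request(f"{METADATA}/v1/token", method="PUT")
    data_req = urllib.request.Request(f"{METADATA}/v1/user-data", method="GET")
    return token_req, data_req
```

Retrying the token PUT rather than failing outright is what lets Ignition start before networking is configured: the first three attempts die with "network is unreachable", and attempt #4 succeeds once `eth0` has its DHCPv4 lease.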
Apr 17 23:38:58.895751 ignition[784]: disks: disks passed
Apr 17 23:38:58.920402 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 23:38:58.895791 ignition[784]: Ignition finished successfully
Apr 17 23:38:58.921749 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:38:58.923203 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:38:58.924649 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:38:58.926274 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:38:58.939328 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 23:38:58.954512 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 17 23:38:58.956929 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 23:38:58.963246 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 23:38:59.049204 kernel: EXT4-fs (sda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none.
Apr 17 23:38:59.050276 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 23:38:59.051504 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:38:59.061271 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:38:59.064284 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 23:38:59.066277 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 17 23:38:59.066323 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 23:38:59.066382 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:38:59.073088 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 23:38:59.075789 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (800)
Apr 17 23:38:59.082204 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:38:59.082243 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:38:59.082255 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:38:59.093543 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 17 23:38:59.093573 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:38:59.101344 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 23:38:59.103504 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:38:59.150718 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 23:38:59.156435 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Apr 17 23:38:59.161764 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 23:38:59.168010 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 23:38:59.169448 systemd-networkd[763]: eth0: Gained IPv6LL
Apr 17 23:38:59.263400 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 23:38:59.269265 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 23:38:59.274269 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 23:38:59.278564 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 23:38:59.281981 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:38:59.303519 ignition[914]: INFO : Ignition 2.19.0
Apr 17 23:38:59.305190 ignition[914]: INFO : Stage: mount
Apr 17 23:38:59.305190 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:38:59.305190 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 17 23:38:59.307270 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 23:38:59.309651 ignition[914]: INFO : mount: mount passed
Apr 17 23:38:59.309651 ignition[914]: INFO : Ignition finished successfully
Apr 17 23:38:59.311056 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 23:38:59.317245 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 23:39:00.057284 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:39:00.072197 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (926)
Apr 17 23:39:00.072256 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:39:00.077796 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:39:00.077817 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:39:00.087501 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 17 23:39:00.087699 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:39:00.090341 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:39:00.117125 ignition[942]: INFO : Ignition 2.19.0
Apr 17 23:39:00.118226 ignition[942]: INFO : Stage: files
Apr 17 23:39:00.118917 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:39:00.118917 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 17 23:39:00.121059 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 23:39:00.121059 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 23:39:00.121059 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 23:39:00.124495 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 23:39:00.125520 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 23:39:00.126955 unknown[942]: wrote ssh authorized keys file for user: core
Apr 17 23:39:00.128097 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 23:39:00.129115 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:39:00.129115 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 17 23:39:00.448106 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 23:39:00.555899 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:39:00.557629 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 17 23:39:01.119096 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 23:39:01.470017 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:39:01.470017 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 17 23:39:01.474263 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:39:01.474263 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:39:01.474263 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 23:39:01.474263 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 17 23:39:01.474263 ignition[942]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 17 23:39:01.474263 ignition[942]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 17 23:39:01.474263 ignition[942]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 17 23:39:01.474263 ignition[942]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 23:39:01.474263 ignition[942]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:39:01.474263 ignition[942]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:39:01.474263 ignition[942]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:39:01.474263 ignition[942]: INFO : files: files passed
Apr 17 23:39:01.474263 ignition[942]: INFO : Ignition finished successfully
Apr 17 23:39:01.475017 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:39:01.505401 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:39:01.510302 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:39:01.513814 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:39:01.513938 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:39:01.528056 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:39:01.528056 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:39:01.530703 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:39:01.534316 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:39:01.536890 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:39:01.543359 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:39:01.567370 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:39:01.567488 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:39:01.569823 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:39:01.570928 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:39:01.572626 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:39:01.584321 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:39:01.597009 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:39:01.602318 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:39:01.611464 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:39:01.612310 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:39:01.613181 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:39:01.614761 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:39:01.614860 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:39:01.616816 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:39:01.617861 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:39:01.619264 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:39:01.620860 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:39:01.622346 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:39:01.623796 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:39:01.625563 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:39:01.627262 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:39:01.629084 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:39:01.630646 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:39:01.632224 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:39:01.632321 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:39:01.634285 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:39:01.635352 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:39:01.636730 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:39:01.636828 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:39:01.638221 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:39:01.638315 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:39:01.640573 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:39:01.640679 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:39:01.641783 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:39:01.641882 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:39:01.654307 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:39:01.658349 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:39:01.659085 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:39:01.659265 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:39:01.661449 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:39:01.661591 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:39:01.673590 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:39:01.673890 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:39:01.679012 ignition[995]: INFO : Ignition 2.19.0
Apr 17 23:39:01.679012 ignition[995]: INFO : Stage: umount
Apr 17 23:39:01.679012 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:39:01.679012 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 17 23:39:01.679012 ignition[995]: INFO : umount: umount passed
Apr 17 23:39:01.679012 ignition[995]: INFO : Ignition finished successfully
Apr 17 23:39:01.679539 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:39:01.679661 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:39:01.683567 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:39:01.683623 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:39:01.687490 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:39:01.687543 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:39:01.688314 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 17 23:39:01.688367 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 17 23:39:01.689121 systemd[1]: Stopped target network.target - Network.
Apr 17 23:39:01.689831 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:39:01.689887 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:39:01.692214 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:39:01.693245 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:39:01.697489 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:39:01.698706 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:39:01.722271 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:39:01.723844 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:39:01.723903 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:39:01.725512 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:39:01.725559 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:39:01.726944 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:39:01.727004 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:39:01.728593 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:39:01.728645 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:39:01.730484 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:39:01.732270 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:39:01.735067 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:39:01.735650 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:39:01.735766 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:39:01.736204 systemd-networkd[763]: eth0: DHCPv6 lease lost
Apr 17 23:39:01.738746 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:39:01.738861 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:39:01.740372 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:39:01.740495 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:39:01.744639 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:39:01.744684 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:39:01.745811 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:39:01.745869 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:39:01.755052 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:39:01.756002 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:39:01.756059 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:39:01.757825 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:39:01.757877 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:39:01.759478 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:39:01.759529 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:39:01.761098 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:39:01.761147 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:39:01.762708 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:39:01.781357 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:39:01.781554 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:39:01.784499 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:39:01.784609 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:39:01.786256 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:39:01.786327 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:39:01.787981 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:39:01.788027 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:39:01.789558 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:39:01.789611 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:39:01.791785 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:39:01.791835 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:39:01.793386 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:39:01.793440 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:39:01.801636 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:39:01.802589 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:39:01.802647 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:39:01.803490 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 17 23:39:01.803543 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:39:01.809974 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:39:01.810028 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:39:01.810822 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:39:01.810872 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:39:01.812232 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:39:01.812336 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:39:01.813771 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:39:01.821370 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:39:01.829509 systemd[1]: Switching root.
Apr 17 23:39:01.866082 systemd-journald[178]: Journal stopped
Apr 17 23:39:03.060107 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:39:03.060131 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:39:03.060143 kernel: SELinux: policy capability open_perms=1
Apr 17 23:39:03.060152 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:39:03.060176 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:39:03.060186 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:39:03.060196 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:39:03.060205 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:39:03.060214 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:39:03.060223 kernel: audit: type=1403 audit(1776469142.006:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:39:03.060233 systemd[1]: Successfully loaded SELinux policy in 55.162ms.
Apr 17 23:39:03.060247 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.868ms.
Apr 17 23:39:03.060259 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:39:03.060269 systemd[1]: Detected virtualization kvm.
Apr 17 23:39:03.060279 systemd[1]: Detected architecture x86-64.
Apr 17 23:39:03.060289 systemd[1]: Detected first boot.
Apr 17 23:39:03.060302 systemd[1]: Initializing machine ID from random generator.
Apr 17 23:39:03.060312 zram_generator::config[1037]: No configuration found.
Apr 17 23:39:03.060322 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:39:03.060332 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 23:39:03.060342 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 23:39:03.060352 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:39:03.060363 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:39:03.060376 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:39:03.060386 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:39:03.060396 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:39:03.060406 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:39:03.060416 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:39:03.060427 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:39:03.060436 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:39:03.060450 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:39:03.060460 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:39:03.060471 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:39:03.060481 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:39:03.060491 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:39:03.060501 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:39:03.060510 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 23:39:03.060520 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:39:03.060533 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 23:39:03.060543 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 23:39:03.060556 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:39:03.060566 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:39:03.060576 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:39:03.060587 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:39:03.060597 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:39:03.060607 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:39:03.060620 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:39:03.060630 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:39:03.060640 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:39:03.060650 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:39:03.060660 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:39:03.060674 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:39:03.060684 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:39:03.060694 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:39:03.060704 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:39:03.060714 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:39:03.060725 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:39:03.060735 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:39:03.060745 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:39:03.060758 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:39:03.060768 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:39:03.060778 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:39:03.060788 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:39:03.060799 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:39:03.060810 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:39:03.060820 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:39:03.060830 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:39:03.060843 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:39:03.060853 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:39:03.060863 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:39:03.060874 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:39:03.060884 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 23:39:03.060895 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 23:39:03.060905 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 23:39:03.060915 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 23:39:03.060927 kernel: fuse: init (API version 7.39)
Apr 17 23:39:03.060937 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:39:03.060947 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:39:03.060957 kernel: ACPI: bus type drm_connector registered
Apr 17 23:39:03.060967 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:39:03.060977 kernel: loop: module loaded
Apr 17 23:39:03.060987 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:39:03.060997 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:39:03.061007 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 23:39:03.061020 systemd[1]: Stopped verity-setup.service.
Apr 17 23:39:03.061048 systemd-journald[1127]: Collecting audit messages is disabled.
Apr 17 23:39:03.061071 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:39:03.061082 systemd-journald[1127]: Journal started
Apr 17 23:39:03.061103 systemd-journald[1127]: Runtime Journal (/run/log/journal/243a8ad5f3d745f0901771830a446f60) is 8.0M, max 78.3M, 70.3M free.
Apr 17 23:39:02.625818 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:39:02.641968 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 17 23:39:02.642502 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 23:39:03.066448 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:39:03.067974 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:39:03.068934 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:39:03.069960 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:39:03.070850 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:39:03.071802 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:39:03.072780 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:39:03.073999 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:39:03.075139 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:39:03.076452 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:39:03.076913 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:39:03.078286 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:39:03.078562 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:39:03.079925 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:39:03.080322 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:39:03.081896 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:39:03.082163 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:39:03.083937 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:39:03.084254 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:39:03.085470 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:39:03.085941 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:39:03.087383 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:39:03.088897 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:39:03.090311 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:39:03.129682 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:39:03.143105 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:39:03.153243 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:39:03.155101 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:39:03.155136 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:39:03.157908 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:39:03.165714 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:39:03.172300 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:39:03.173154 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:39:03.180681 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:39:03.187285 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:39:03.188115 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:39:03.190346 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:39:03.192397 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:39:03.199334 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:39:03.211371 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:39:03.221399 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:39:03.228575 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:39:03.230816 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:39:03.231687 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:39:03.235072 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 23:39:03.253261 systemd-journald[1127]: Time spent on flushing to /var/log/journal/243a8ad5f3d745f0901771830a446f60 is 89.789ms for 978 entries.
Apr 17 23:39:03.253261 systemd-journald[1127]: System Journal (/var/log/journal/243a8ad5f3d745f0901771830a446f60) is 8.0M, max 195.6M, 187.6M free.
Apr 17 23:39:03.378435 systemd-journald[1127]: Received client request to flush runtime journal.
Apr 17 23:39:03.378485 kernel: loop0: detected capacity change from 0 to 142488
Apr 17 23:39:03.378809 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 23:39:03.256111 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:39:03.257443 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 23:39:03.259344 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 23:39:03.263503 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 17 23:39:03.314739 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:39:03.320368 systemd-tmpfiles[1158]: ACLs are not supported, ignoring.
Apr 17 23:39:03.320382 systemd-tmpfiles[1158]: ACLs are not supported, ignoring.
Apr 17 23:39:03.338125 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:39:03.355422 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 23:39:03.356046 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 17 23:39:03.404024 kernel: loop1: detected capacity change from 0 to 219192
Apr 17 23:39:03.368852 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:39:03.370825 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 17 23:39:03.389045 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 23:39:03.451345 kernel: loop2: detected capacity change from 0 to 140768
Apr 17 23:39:03.458105 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 23:39:03.470295 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:39:03.495689 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Apr 17 23:39:03.496413 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Apr 17 23:39:03.507040 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:39:03.513324 kernel: loop3: detected capacity change from 0 to 8
Apr 17 23:39:03.542260 kernel: loop4: detected capacity change from 0 to 142488
Apr 17 23:39:03.573733 kernel: loop5: detected capacity change from 0 to 219192
Apr 17 23:39:03.600206 kernel: loop6: detected capacity change from 0 to 140768
Apr 17 23:39:03.624198 kernel: loop7: detected capacity change from 0 to 8
Apr 17 23:39:03.628043 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Apr 17 23:39:03.628737 (sd-merge)[1185]: Merged extensions into '/usr'.
Apr 17 23:39:03.634494 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 23:39:03.635042 systemd[1]: Reloading...
Apr 17 23:39:03.707193 zram_generator::config[1211]: No configuration found.
Apr 17 23:39:03.770888 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 23:39:03.881341 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:39:03.924251 systemd[1]: Reloading finished in 288 ms.
Apr 17 23:39:03.957636 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 17 23:39:03.959321 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 23:39:03.960710 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 23:39:03.973313 systemd[1]: Starting ensure-sysext.service...
Apr 17 23:39:03.975681 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:39:03.978334 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:39:03.992338 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)...
Apr 17 23:39:03.992354 systemd[1]: Reloading...
Apr 17 23:39:04.025864 systemd-udevd[1257]: Using default interface naming scheme 'v255'.
Apr 17 23:39:04.034611 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 23:39:04.035272 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 23:39:04.037777 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 23:39:04.038119 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Apr 17 23:39:04.038281 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Apr 17 23:39:04.046526 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:39:04.046605 systemd-tmpfiles[1256]: Skipping /boot
Apr 17 23:39:04.084194 zram_generator::config[1282]: No configuration found.
Apr 17 23:39:04.087997 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:39:04.088238 systemd-tmpfiles[1256]: Skipping /boot
Apr 17 23:39:04.283216 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1309)
Apr 17 23:39:04.282688 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:39:04.325433 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 17 23:39:04.356184 kernel: ACPI: button: Power Button [PWRF]
Apr 17 23:39:04.366870 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 17 23:39:04.365514 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 17 23:39:04.366156 systemd[1]: Reloading finished in 373 ms.
Apr 17 23:39:04.386052 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:39:04.388686 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:39:04.435192 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 17 23:39:04.443865 kernel: EDAC MC: Ver: 3.0.0
Apr 17 23:39:04.443912 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 17 23:39:04.444110 kernel: mousedev: PS/2 mouse device common for all mice
Apr 17 23:39:04.444125 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 17 23:39:04.469142 systemd[1]: Finished ensure-sysext.service.
Apr 17 23:39:04.474654 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 17 23:39:04.475579 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:39:04.481325 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:39:04.486304 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 23:39:04.487229 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:39:04.490359 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:39:04.499351 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:39:04.504109 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:39:04.507644 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:39:04.509935 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:39:04.518093 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 23:39:04.521672 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 23:39:04.528251 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:39:04.538378 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:39:04.542065 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 17 23:39:04.545070 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 23:39:04.551275 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:39:04.553290 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:39:04.554299 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 17 23:39:04.566433 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:39:04.566629 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:39:04.568587 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:39:04.569187 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:39:04.570819 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:39:04.571370 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:39:04.572651 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:39:04.573195 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:39:04.594418 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 17 23:39:04.596262 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:39:04.596512 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:39:04.603530 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 23:39:04.614249 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 23:39:04.616459 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 23:39:04.640872 lvm[1391]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:39:04.649184 augenrules[1399]: No rules
Apr 17 23:39:04.649670 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:39:04.650905 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 23:39:04.655314 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 17 23:39:04.657059 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 23:39:04.667924 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 23:39:04.674873 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 17 23:39:04.681621 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 17 23:39:04.685546 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:39:04.705334 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 17 23:39:04.707926 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 17 23:39:04.820788 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:39:04.824094 systemd-networkd[1376]: lo: Link UP
Apr 17 23:39:04.824301 systemd-networkd[1376]: lo: Gained carrier
Apr 17 23:39:04.825308 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 17 23:39:04.826760 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 23:39:04.827561 systemd-networkd[1376]: Enumeration completed
Apr 17 23:39:04.827952 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:39:04.827994 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:39:04.828000 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:39:04.830853 systemd-networkd[1376]: eth0: Link UP
Apr 17 23:39:04.831251 systemd-timesyncd[1379]: No network connectivity, watching for changes.
Apr 17 23:39:04.831417 systemd-networkd[1376]: eth0: Gained carrier
Apr 17 23:39:04.831431 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:39:04.833397 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 23:39:04.835588 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:39:04.838803 systemd-resolved[1377]: Positive Trust Anchors:
Apr 17 23:39:04.838823 systemd-resolved[1377]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:39:04.838853 systemd-resolved[1377]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:39:04.844895 systemd-resolved[1377]: Defaulting to hostname 'linux'.
Apr 17 23:39:04.847055 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:39:04.849325 systemd[1]: Reached target network.target - Network.
Apr 17 23:39:04.852223 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:39:04.853186 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:39:04.854006 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 17 23:39:04.854855 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 17 23:39:04.855838 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 17 23:39:04.856721 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 17 23:39:04.857530 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 17 23:39:04.858334 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 17 23:39:04.858368 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:39:04.859060 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:39:04.860350 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 17 23:39:04.863303 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 17 23:39:04.872625 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 17 23:39:04.874678 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 17 23:39:04.875690 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 17 23:39:04.877143 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:39:04.877900 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:39:04.878682 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:39:04.878721 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:39:04.883253 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 17 23:39:04.886354 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 17 23:39:04.891749 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 17 23:39:04.894272 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 17 23:39:04.900410 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 17 23:39:04.902298 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 17 23:39:04.910354 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 17 23:39:04.923283 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 17 23:39:04.928364 jq[1429]: false
Apr 17 23:39:04.929147 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 17 23:39:04.939315 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 17 23:39:04.945057 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 17 23:39:04.947007 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 17 23:39:04.947533 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 17 23:39:04.950412 systemd[1]: Starting update-engine.service - Update Engine...
Apr 17 23:39:04.953488 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 17 23:39:04.961555 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 17 23:39:04.961926 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 17 23:39:04.964224 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 17 23:39:04.964472 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 17 23:39:04.983726 dbus-daemon[1428]: [system] SELinux support is enabled
Apr 17 23:39:04.983900 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 17 23:39:04.986907 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 17 23:39:04.986945 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 17 23:39:04.989707 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 17 23:39:04.989737 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 17 23:39:04.990459 update_engine[1439]: I20260417 23:39:04.990180 1439 main.cc:92] Flatcar Update Engine starting
Apr 17 23:39:04.999185 jq[1441]: true
Apr 17 23:39:05.009184 systemd[1]: motdgen.service: Deactivated successfully.
Apr 17 23:39:05.009438 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 17 23:39:05.012194 systemd[1]: Started update-engine.service - Update Engine.
Apr 17 23:39:05.019546 extend-filesystems[1430]: Found loop4
Apr 17 23:39:05.019546 extend-filesystems[1430]: Found loop5
Apr 17 23:39:05.019546 extend-filesystems[1430]: Found loop6
Apr 17 23:39:05.019546 extend-filesystems[1430]: Found loop7
Apr 17 23:39:05.019546 extend-filesystems[1430]: Found sda
Apr 17 23:39:05.019546 extend-filesystems[1430]: Found sda1
Apr 17 23:39:05.019546 extend-filesystems[1430]: Found sda2
Apr 17 23:39:05.019546 extend-filesystems[1430]: Found sda3
Apr 17 23:39:05.019546 extend-filesystems[1430]: Found usr
Apr 17 23:39:05.019546 extend-filesystems[1430]: Found sda4
Apr 17 23:39:05.019546 extend-filesystems[1430]: Found sda6
Apr 17 23:39:05.019546 extend-filesystems[1430]: Found sda7
Apr 17 23:39:05.019546 extend-filesystems[1430]: Found sda9
Apr 17 23:39:05.019546 extend-filesystems[1430]: Checking size of /dev/sda9
Apr 17 23:39:05.020572 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 17 23:39:05.052140 update_engine[1439]: I20260417 23:39:05.022444 1439 update_check_scheduler.cc:74] Next update check in 9m43s
Apr 17 23:39:05.052199 tar[1444]: linux-amd64/LICENSE
Apr 17 23:39:05.052199 tar[1444]: linux-amd64/helm
Apr 17 23:39:05.039514 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 17 23:39:05.062194 jq[1455]: true
Apr 17 23:39:05.064398 extend-filesystems[1430]: Resized partition /dev/sda9
Apr 17 23:39:05.066859 extend-filesystems[1468]: resize2fs 1.47.1 (20-May-2024)
Apr 17 23:39:05.074006 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Apr 17 23:39:05.146295 coreos-metadata[1427]: Apr 17 23:39:05.146 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Apr 17 23:39:05.151929 locksmithd[1460]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 17 23:39:05.228149 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1286)
Apr 17 23:39:05.227457 systemd-logind[1438]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 17 23:39:05.227479 systemd-logind[1438]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 17 23:39:05.230228 systemd-logind[1438]: New seat seat0.
Apr 17 23:39:05.232991 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 17 23:39:05.239591 bash[1490]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 23:39:05.241414 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 17 23:39:05.251413 systemd[1]: Starting sshkeys.service...
Apr 17 23:39:05.334337 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 17 23:39:05.342852 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 17 23:39:05.383192 containerd[1456]: time="2026-04-17T23:39:05.382231880Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 17 23:39:05.402262 coreos-metadata[1498]: Apr 17 23:39:05.402 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Apr 17 23:39:05.419178 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Apr 17 23:39:05.427580 containerd[1456]: time="2026-04-17T23:39:05.425808140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:39:05.428924 extend-filesystems[1468]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 17 23:39:05.428924 extend-filesystems[1468]: old_desc_blocks = 1, new_desc_blocks = 10
Apr 17 23:39:05.428924 extend-filesystems[1468]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
Apr 17 23:39:05.436722 extend-filesystems[1430]: Resized filesystem in /dev/sda9
Apr 17 23:39:05.437910 containerd[1456]: time="2026-04-17T23:39:05.430997380Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:39:05.437910 containerd[1456]: time="2026-04-17T23:39:05.431019660Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 17 23:39:05.437910 containerd[1456]: time="2026-04-17T23:39:05.431034160Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 17 23:39:05.437910 containerd[1456]: time="2026-04-17T23:39:05.431234900Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 17 23:39:05.437910 containerd[1456]: time="2026-04-17T23:39:05.431251390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 17 23:39:05.437910 containerd[1456]: time="2026-04-17T23:39:05.431316800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:39:05.437910 containerd[1456]: time="2026-04-17T23:39:05.431328620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:39:05.437910 containerd[1456]: time="2026-04-17T23:39:05.431498120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:39:05.437910 containerd[1456]: time="2026-04-17T23:39:05.431512360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 17 23:39:05.437910 containerd[1456]: time="2026-04-17T23:39:05.431524520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:39:05.437910 containerd[1456]: time="2026-04-17T23:39:05.431533360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 17 23:39:05.429992 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 17 23:39:05.439248 containerd[1456]: time="2026-04-17T23:39:05.431619910Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:39:05.439248 containerd[1456]: time="2026-04-17T23:39:05.431848810Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:39:05.439248 containerd[1456]: time="2026-04-17T23:39:05.431954740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:39:05.439248 containerd[1456]: time="2026-04-17T23:39:05.431966820Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 17 23:39:05.439248 containerd[1456]: time="2026-04-17T23:39:05.432063090Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 17 23:39:05.439248 containerd[1456]: time="2026-04-17T23:39:05.432115900Z" level=info msg="metadata content store policy set" policy=shared
Apr 17 23:39:05.439248 containerd[1456]: time="2026-04-17T23:39:05.438976020Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 17 23:39:05.439248 containerd[1456]: time="2026-04-17T23:39:05.439010790Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 17 23:39:05.439248 containerd[1456]: time="2026-04-17T23:39:05.439025900Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 17 23:39:05.439248 containerd[1456]: time="2026-04-17T23:39:05.439040410Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 17 23:39:05.439248 containerd[1456]: time="2026-04-17T23:39:05.439058130Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 17 23:39:05.439248 containerd[1456]: time="2026-04-17T23:39:05.439203360Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 17 23:39:05.430270 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 17 23:39:05.439481 containerd[1456]: time="2026-04-17T23:39:05.439373100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 17 23:39:05.439502 containerd[1456]: time="2026-04-17T23:39:05.439477380Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 17 23:39:05.439502 containerd[1456]: time="2026-04-17T23:39:05.439493120Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..."
type=io.containerd.sandbox.store.v1 Apr 17 23:39:05.439535 containerd[1456]: time="2026-04-17T23:39:05.439506040Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 17 23:39:05.439535 containerd[1456]: time="2026-04-17T23:39:05.439518540Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 17 23:39:05.439535 containerd[1456]: time="2026-04-17T23:39:05.439531020Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 17 23:39:05.439586 containerd[1456]: time="2026-04-17T23:39:05.439541500Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 17 23:39:05.439586 containerd[1456]: time="2026-04-17T23:39:05.439554130Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 17 23:39:05.439586 containerd[1456]: time="2026-04-17T23:39:05.439566710Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 17 23:39:05.439586 containerd[1456]: time="2026-04-17T23:39:05.439578570Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 17 23:39:05.439647 containerd[1456]: time="2026-04-17T23:39:05.439589560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 17 23:39:05.439647 containerd[1456]: time="2026-04-17T23:39:05.439599900Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 17 23:39:05.439647 containerd[1456]: time="2026-04-17T23:39:05.439624200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Apr 17 23:39:05.439647 containerd[1456]: time="2026-04-17T23:39:05.439636610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 17 23:39:05.439647 containerd[1456]: time="2026-04-17T23:39:05.439647200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 17 23:39:05.439732 containerd[1456]: time="2026-04-17T23:39:05.439659190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 17 23:39:05.439732 containerd[1456]: time="2026-04-17T23:39:05.439671000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 17 23:39:05.439732 containerd[1456]: time="2026-04-17T23:39:05.439682800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 17 23:39:05.439732 containerd[1456]: time="2026-04-17T23:39:05.439693050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 17 23:39:05.439732 containerd[1456]: time="2026-04-17T23:39:05.439703910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 17 23:39:05.439732 containerd[1456]: time="2026-04-17T23:39:05.439715220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 17 23:39:05.439732 containerd[1456]: time="2026-04-17T23:39:05.439727990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 17 23:39:05.439838 containerd[1456]: time="2026-04-17T23:39:05.439738080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 17 23:39:05.439838 containerd[1456]: time="2026-04-17T23:39:05.439749410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Apr 17 23:39:05.439838 containerd[1456]: time="2026-04-17T23:39:05.439759960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 17 23:39:05.439838 containerd[1456]: time="2026-04-17T23:39:05.439773100Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 17 23:39:05.439838 containerd[1456]: time="2026-04-17T23:39:05.439795070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 17 23:39:05.439838 containerd[1456]: time="2026-04-17T23:39:05.439805600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 17 23:39:05.439838 containerd[1456]: time="2026-04-17T23:39:05.439814960Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 17 23:39:05.439948 containerd[1456]: time="2026-04-17T23:39:05.439852540Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 17 23:39:05.439948 containerd[1456]: time="2026-04-17T23:39:05.439866390Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 17 23:39:05.439948 containerd[1456]: time="2026-04-17T23:39:05.439875960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 17 23:39:05.439948 containerd[1456]: time="2026-04-17T23:39:05.439885800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 17 23:39:05.439948 containerd[1456]: time="2026-04-17T23:39:05.439894160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Apr 17 23:39:05.439948 containerd[1456]: time="2026-04-17T23:39:05.439904840Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 17 23:39:05.439948 containerd[1456]: time="2026-04-17T23:39:05.439918480Z" level=info msg="NRI interface is disabled by configuration." Apr 17 23:39:05.439948 containerd[1456]: time="2026-04-17T23:39:05.439928640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 17 23:39:05.443192 containerd[1456]: time="2026-04-17T23:39:05.440138520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 17 23:39:05.443192 containerd[1456]: time="2026-04-17T23:39:05.441746590Z" level=info msg="Connect containerd service" Apr 17 23:39:05.443192 containerd[1456]: time="2026-04-17T23:39:05.441818920Z" level=info msg="using legacy CRI server" Apr 17 23:39:05.443192 containerd[1456]: time="2026-04-17T23:39:05.441829570Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 23:39:05.443192 containerd[1456]: time="2026-04-17T23:39:05.441915950Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 17 23:39:05.443622 containerd[1456]: time="2026-04-17T23:39:05.443594380Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Apr 17 23:39:05.444269 containerd[1456]: time="2026-04-17T23:39:05.444076120Z" level=info msg="Start subscribing containerd event" Apr 17 23:39:05.444295 containerd[1456]: time="2026-04-17T23:39:05.444276580Z" level=info msg="Start recovering state" Apr 17 23:39:05.444800 containerd[1456]: time="2026-04-17T23:39:05.444606700Z" level=info msg="Start event monitor" Apr 17 23:39:05.444847 containerd[1456]: time="2026-04-17T23:39:05.444826330Z" level=info msg="Start snapshots syncer" Apr 17 23:39:05.444847 containerd[1456]: time="2026-04-17T23:39:05.444844450Z" level=info msg="Start cni network conf syncer for default" Apr 17 23:39:05.444893 containerd[1456]: time="2026-04-17T23:39:05.444853740Z" level=info msg="Start streaming server" Apr 17 23:39:05.445317 containerd[1456]: time="2026-04-17T23:39:05.444708580Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 23:39:05.445642 containerd[1456]: time="2026-04-17T23:39:05.445620470Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 23:39:05.446020 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 23:39:05.446849 containerd[1456]: time="2026-04-17T23:39:05.446825950Z" level=info msg="containerd successfully booted in 0.066781s" Apr 17 23:39:05.454494 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 17 23:39:05.483606 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 17 23:39:05.496303 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 23:39:05.503813 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 23:39:05.504030 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 23:39:05.512800 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 23:39:05.524449 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Apr 17 23:39:05.533037 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 17 23:39:05.538404 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 17 23:39:05.565522 systemd[1]: Reached target getty.target - Login Prompts.
Apr 17 23:39:05.613241 systemd-networkd[1376]: eth0: DHCPv4 address 172.238.189.76/24, gateway 172.238.189.1 acquired from 23.205.167.174
Apr 17 23:39:05.613712 dbus-daemon[1428]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1376 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 17 23:39:05.616572 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection.
Apr 17 23:39:05.629114 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 17 23:39:05.690155 dbus-daemon[1428]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 17 23:39:05.690398 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 17 23:39:05.692207 dbus-daemon[1428]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1524 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 17 23:39:05.703075 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 17 23:39:05.716418 polkitd[1525]: Started polkitd version 121
Apr 17 23:39:05.720980 polkitd[1525]: Loading rules from directory /etc/polkit-1/rules.d
Apr 17 23:39:05.721249 polkitd[1525]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 17 23:39:05.723342 polkitd[1525]: Finished loading, compiling and executing 2 rules
Apr 17 23:39:05.725527 dbus-daemon[1428]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 17 23:39:05.725827 systemd[1]: Started polkit.service - Authorization Manager.
Apr 17 23:39:05.726822 polkitd[1525]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 17 23:39:05.738091 systemd-hostnamed[1524]: Hostname set to <172-238-189-76> (transient)
Apr 17 23:39:05.740224 systemd-resolved[1377]: System hostname changed to '172-238-189-76'.
Apr 17 23:39:05.810009 tar[1444]: linux-amd64/README.md
Apr 17 23:39:05.821617 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 17 23:39:06.785736 systemd-timesyncd[1379]: Contacted time server 199.127.61.186:123 (2.flatcar.pool.ntp.org).
Apr 17 23:39:06.785799 systemd-timesyncd[1379]: Initial clock synchronization to Fri 2026-04-17 23:39:06.785551 UTC.
Apr 17 23:39:06.786546 systemd-resolved[1377]: Clock change detected. Flushing caches.
Apr 17 23:39:07.098805 coreos-metadata[1427]: Apr 17 23:39:07.098 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Apr 17 23:39:07.188520 coreos-metadata[1427]: Apr 17 23:39:07.188 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Apr 17 23:39:07.354050 coreos-metadata[1498]: Apr 17 23:39:07.353 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Apr 17 23:39:07.373330 coreos-metadata[1427]: Apr 17 23:39:07.373 INFO Fetch successful
Apr 17 23:39:07.373430 coreos-metadata[1427]: Apr 17 23:39:07.373 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Apr 17 23:39:07.452912 coreos-metadata[1498]: Apr 17 23:39:07.452 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Apr 17 23:39:07.532859 systemd-networkd[1376]: eth0: Gained IPv6LL
Apr 17 23:39:07.537252 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 17 23:39:07.539232 systemd[1]: Reached target network-online.target - Network is Online.
Apr 17 23:39:07.548038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:39:07.552943 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 17 23:39:07.581810 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 17 23:39:07.588651 coreos-metadata[1498]: Apr 17 23:39:07.588 INFO Fetch successful
Apr 17 23:39:07.603647 update-ssh-keys[1553]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 23:39:07.604152 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 17 23:39:07.606670 systemd[1]: Finished sshkeys.service.
Apr 17 23:39:07.703337 coreos-metadata[1427]: Apr 17 23:39:07.702 INFO Fetch successful
Apr 17 23:39:07.807747 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 17 23:39:07.809356 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 17 23:39:08.446358 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:39:08.447533 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 17 23:39:08.453807 systemd[1]: Startup finished in 1.039s (kernel) + 8.284s (initrd) + 5.560s (userspace) = 14.884s.
Apr 17 23:39:08.485261 (kubelet)[1580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:39:08.916040 kubelet[1580]: E0417 23:39:08.915925 1580 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:39:08.919847 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:39:08.920047 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:39:10.090562 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 17 23:39:10.099923 systemd[1]: Started sshd@0-172.238.189.76:22-50.85.169.122:46650.service - OpenSSH per-connection server daemon (50.85.169.122:46650).
Apr 17 23:39:10.710449 sshd[1591]: Accepted publickey for core from 50.85.169.122 port 46650 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U
Apr 17 23:39:10.712831 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:39:10.720786 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 17 23:39:10.737027 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 17 23:39:10.738939 systemd-logind[1438]: New session 1 of user core.
Apr 17 23:39:10.750131 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 17 23:39:10.756918 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 17 23:39:10.773367 (systemd)[1595]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 17 23:39:10.870081 systemd[1595]: Queued start job for default target default.target.
Apr 17 23:39:10.883419 systemd[1595]: Created slice app.slice - User Application Slice.
Apr 17 23:39:10.883449 systemd[1595]: Reached target paths.target - Paths.
Apr 17 23:39:10.883463 systemd[1595]: Reached target timers.target - Timers.
Apr 17 23:39:10.885115 systemd[1595]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 17 23:39:10.898587 systemd[1595]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 17 23:39:10.898731 systemd[1595]: Reached target sockets.target - Sockets.
Apr 17 23:39:10.898748 systemd[1595]: Reached target basic.target - Basic System.
Apr 17 23:39:10.898787 systemd[1595]: Reached target default.target - Main User Target.
Apr 17 23:39:10.898824 systemd[1595]: Startup finished in 119ms.
Apr 17 23:39:10.899094 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 17 23:39:10.908821 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 17 23:39:11.349553 systemd[1]: Started sshd@1-172.238.189.76:22-50.85.169.122:46664.service - OpenSSH per-connection server daemon (50.85.169.122:46664).
Apr 17 23:39:11.950929 sshd[1606]: Accepted publickey for core from 50.85.169.122 port 46664 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U
Apr 17 23:39:11.952741 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:39:11.957828 systemd-logind[1438]: New session 2 of user core.
Apr 17 23:39:11.962833 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 17 23:39:12.381214 sshd[1606]: pam_unix(sshd:session): session closed for user core
Apr 17 23:39:12.385480 systemd[1]: sshd@1-172.238.189.76:22-50.85.169.122:46664.service: Deactivated successfully.
Apr 17 23:39:12.387612 systemd[1]: session-2.scope: Deactivated successfully.
Apr 17 23:39:12.388180 systemd-logind[1438]: Session 2 logged out. Waiting for processes to exit.
Apr 17 23:39:12.389170 systemd-logind[1438]: Removed session 2.
Apr 17 23:39:12.486590 systemd[1]: Started sshd@2-172.238.189.76:22-50.85.169.122:46666.service - OpenSSH per-connection server daemon (50.85.169.122:46666).
Apr 17 23:39:13.086134 sshd[1613]: Accepted publickey for core from 50.85.169.122 port 46666 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U
Apr 17 23:39:13.087899 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:39:13.092927 systemd-logind[1438]: New session 3 of user core.
Apr 17 23:39:13.099842 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 17 23:39:13.509643 sshd[1613]: pam_unix(sshd:session): session closed for user core
Apr 17 23:39:13.514096 systemd[1]: sshd@2-172.238.189.76:22-50.85.169.122:46666.service: Deactivated successfully.
Apr 17 23:39:13.515835 systemd[1]: session-3.scope: Deactivated successfully.
Apr 17 23:39:13.516571 systemd-logind[1438]: Session 3 logged out. Waiting for processes to exit.
Apr 17 23:39:13.517429 systemd-logind[1438]: Removed session 3.
Apr 17 23:39:13.614058 systemd[1]: Started sshd@3-172.238.189.76:22-50.85.169.122:46672.service - OpenSSH per-connection server daemon (50.85.169.122:46672).
Apr 17 23:39:14.212349 sshd[1620]: Accepted publickey for core from 50.85.169.122 port 46672 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U
Apr 17 23:39:14.214595 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:39:14.219448 systemd-logind[1438]: New session 4 of user core.
Apr 17 23:39:14.228813 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 17 23:39:14.642609 sshd[1620]: pam_unix(sshd:session): session closed for user core
Apr 17 23:39:14.646113 systemd[1]: sshd@3-172.238.189.76:22-50.85.169.122:46672.service: Deactivated successfully.
Apr 17 23:39:14.648556 systemd[1]: session-4.scope: Deactivated successfully.
Apr 17 23:39:14.649838 systemd-logind[1438]: Session 4 logged out. Waiting for processes to exit.
Apr 17 23:39:14.651063 systemd-logind[1438]: Removed session 4.
Apr 17 23:39:14.748877 systemd[1]: Started sshd@4-172.238.189.76:22-50.85.169.122:46676.service - OpenSSH per-connection server daemon (50.85.169.122:46676).
Apr 17 23:39:15.354171 sshd[1627]: Accepted publickey for core from 50.85.169.122 port 46676 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U
Apr 17 23:39:15.361075 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:39:15.370781 systemd-logind[1438]: New session 5 of user core.
Apr 17 23:39:15.382845 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 17 23:39:15.707359 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 17 23:39:15.707968 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:39:15.725896 sudo[1630]: pam_unix(sudo:session): session closed for user root
Apr 17 23:39:15.822150 sshd[1627]: pam_unix(sshd:session): session closed for user core
Apr 17 23:39:15.826780 systemd[1]: sshd@4-172.238.189.76:22-50.85.169.122:46676.service: Deactivated successfully.
Apr 17 23:39:15.829035 systemd[1]: session-5.scope: Deactivated successfully.
Apr 17 23:39:15.829829 systemd-logind[1438]: Session 5 logged out. Waiting for processes to exit.
Apr 17 23:39:15.831060 systemd-logind[1438]: Removed session 5.
Apr 17 23:39:15.926410 systemd[1]: Started sshd@5-172.238.189.76:22-50.85.169.122:46688.service - OpenSSH per-connection server daemon (50.85.169.122:46688).
Apr 17 23:39:16.525129 sshd[1635]: Accepted publickey for core from 50.85.169.122 port 46688 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U
Apr 17 23:39:16.526604 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:39:16.531368 systemd-logind[1438]: New session 6 of user core.
Apr 17 23:39:16.542830 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 17 23:39:16.859215 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 17 23:39:16.859564 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:39:16.863098 sudo[1639]: pam_unix(sudo:session): session closed for user root
Apr 17 23:39:16.868602 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 17 23:39:16.868953 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:39:16.881108 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 17 23:39:16.883190 auditctl[1642]: No rules
Apr 17 23:39:16.883673 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 17 23:39:16.883900 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 17 23:39:16.886060 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:39:16.913550 augenrules[1660]: No rules
Apr 17 23:39:16.915137 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:39:16.916328 sudo[1638]: pam_unix(sudo:session): session closed for user root
Apr 17 23:39:17.012364 sshd[1635]: pam_unix(sshd:session): session closed for user core
Apr 17 23:39:17.016069 systemd[1]: sshd@5-172.238.189.76:22-50.85.169.122:46688.service: Deactivated successfully.
Apr 17 23:39:17.018243 systemd[1]: session-6.scope: Deactivated successfully.
Apr 17 23:39:17.018825 systemd-logind[1438]: Session 6 logged out. Waiting for processes to exit.
Apr 17 23:39:17.019637 systemd-logind[1438]: Removed session 6.
Apr 17 23:39:17.119204 systemd[1]: Started sshd@6-172.238.189.76:22-50.85.169.122:46702.service - OpenSSH per-connection server daemon (50.85.169.122:46702).
Apr 17 23:39:17.716535 sshd[1668]: Accepted publickey for core from 50.85.169.122 port 46702 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U
Apr 17 23:39:17.718255 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:39:17.722751 systemd-logind[1438]: New session 7 of user core.
Apr 17 23:39:17.730008 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 17 23:39:18.050018 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 17 23:39:18.050939 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:39:18.298894 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 17 23:39:18.299068 (dockerd)[1688]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 17 23:39:18.543031 dockerd[1688]: time="2026-04-17T23:39:18.542078294Z" level=info msg="Starting up"
Apr 17 23:39:18.608299 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1057736616-merged.mount: Deactivated successfully.
Apr 17 23:39:18.615954 systemd[1]: var-lib-docker-metacopy\x2dcheck3554934967-merged.mount: Deactivated successfully.
Apr 17 23:39:18.637963 dockerd[1688]: time="2026-04-17T23:39:18.637934004Z" level=info msg="Loading containers: start."
Apr 17 23:39:18.748734 kernel: Initializing XFRM netlink socket
Apr 17 23:39:18.843916 systemd-networkd[1376]: docker0: Link UP
Apr 17 23:39:18.856927 dockerd[1688]: time="2026-04-17T23:39:18.856880774Z" level=info msg="Loading containers: done."
Apr 17 23:39:18.874797 dockerd[1688]: time="2026-04-17T23:39:18.874412104Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 17 23:39:18.876832 dockerd[1688]: time="2026-04-17T23:39:18.876781384Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 17 23:39:18.876962 dockerd[1688]: time="2026-04-17T23:39:18.876934934Z" level=info msg="Daemon has completed initialization"
Apr 17 23:39:18.908304 dockerd[1688]: time="2026-04-17T23:39:18.908238134Z" level=info msg="API listen on /run/docker.sock"
Apr 17 23:39:18.908596 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 17 23:39:19.170308 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:39:19.177860 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:39:19.341837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:39:19.344848 (kubelet)[1831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:39:19.381617 kubelet[1831]: E0417 23:39:19.381561 1831 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:39:19.387308 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:39:19.387515 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:39:19.678910 containerd[1456]: time="2026-04-17T23:39:19.678857324Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 17 23:39:20.235885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1760732579.mount: Deactivated successfully. Apr 17 23:39:21.361741 containerd[1456]: time="2026-04-17T23:39:21.361077124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:21.362130 containerd[1456]: time="2026-04-17T23:39:21.362024564Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27100520" Apr 17 23:39:21.362734 containerd[1456]: time="2026-04-17T23:39:21.362511474Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:21.366405 containerd[1456]: time="2026-04-17T23:39:21.365187924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:21.366405 containerd[1456]: time="2026-04-17T23:39:21.366230044Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 1.68733769s" Apr 17 23:39:21.366405 containerd[1456]: time="2026-04-17T23:39:21.366258464Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 17 23:39:21.367294 containerd[1456]: 
time="2026-04-17T23:39:21.367273694Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 17 23:39:22.462733 containerd[1456]: time="2026-04-17T23:39:22.462672064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:22.463670 containerd[1456]: time="2026-04-17T23:39:22.463638854Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252744" Apr 17 23:39:22.464454 containerd[1456]: time="2026-04-17T23:39:22.464171754Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:22.466519 containerd[1456]: time="2026-04-17T23:39:22.466495864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:22.467498 containerd[1456]: time="2026-04-17T23:39:22.467465814Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 1.10010141s" Apr 17 23:39:22.467554 containerd[1456]: time="2026-04-17T23:39:22.467496704Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 17 23:39:22.468020 containerd[1456]: time="2026-04-17T23:39:22.467992524Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 17 
23:39:23.480940 containerd[1456]: time="2026-04-17T23:39:23.480874974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:23.481918 containerd[1456]: time="2026-04-17T23:39:23.481876264Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810897" Apr 17 23:39:23.483726 containerd[1456]: time="2026-04-17T23:39:23.482416864Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:23.485461 containerd[1456]: time="2026-04-17T23:39:23.485420494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:23.487225 containerd[1456]: time="2026-04-17T23:39:23.486970804Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 1.0189468s" Apr 17 23:39:23.487225 containerd[1456]: time="2026-04-17T23:39:23.487013504Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 17 23:39:23.487509 containerd[1456]: time="2026-04-17T23:39:23.487478014Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 17 23:39:24.479217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1127132883.mount: Deactivated successfully. 
Apr 17 23:39:24.726588 containerd[1456]: time="2026-04-17T23:39:24.726325614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:24.727775 containerd[1456]: time="2026-04-17T23:39:24.727424854Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972960" Apr 17 23:39:24.728730 containerd[1456]: time="2026-04-17T23:39:24.728228254Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:24.731177 containerd[1456]: time="2026-04-17T23:39:24.730361264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:24.731177 containerd[1456]: time="2026-04-17T23:39:24.731014724Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 1.2434989s" Apr 17 23:39:24.731177 containerd[1456]: time="2026-04-17T23:39:24.731040254Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 17 23:39:24.732203 containerd[1456]: time="2026-04-17T23:39:24.732176004Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 17 23:39:25.280732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2296274257.mount: Deactivated successfully. 
Apr 17 23:39:26.137464 containerd[1456]: time="2026-04-17T23:39:26.137417014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:26.138662 containerd[1456]: time="2026-04-17T23:39:26.138605814Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388013" Apr 17 23:39:26.139428 containerd[1456]: time="2026-04-17T23:39:26.139064984Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:26.141888 containerd[1456]: time="2026-04-17T23:39:26.141849474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:26.143215 containerd[1456]: time="2026-04-17T23:39:26.143182154Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.41097674s" Apr 17 23:39:26.143260 containerd[1456]: time="2026-04-17T23:39:26.143216094Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 17 23:39:26.144799 containerd[1456]: time="2026-04-17T23:39:26.144765044Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 17 23:39:26.619420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1417377063.mount: Deactivated successfully. 
Apr 17 23:39:26.624191 containerd[1456]: time="2026-04-17T23:39:26.624146934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:26.625012 containerd[1456]: time="2026-04-17T23:39:26.624876314Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321224" Apr 17 23:39:26.628636 containerd[1456]: time="2026-04-17T23:39:26.628602854Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:26.631673 containerd[1456]: time="2026-04-17T23:39:26.631632134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:26.632727 containerd[1456]: time="2026-04-17T23:39:26.632402374Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 487.60694ms" Apr 17 23:39:26.632727 containerd[1456]: time="2026-04-17T23:39:26.632429044Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 17 23:39:26.633296 containerd[1456]: time="2026-04-17T23:39:26.633144144Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 17 23:39:27.161065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount752010764.mount: Deactivated successfully. 
Apr 17 23:39:27.872974 containerd[1456]: time="2026-04-17T23:39:27.872891774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:27.874399 containerd[1456]: time="2026-04-17T23:39:27.874258014Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874823" Apr 17 23:39:27.874399 containerd[1456]: time="2026-04-17T23:39:27.874361794Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:27.878718 containerd[1456]: time="2026-04-17T23:39:27.877923664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:27.882522 containerd[1456]: time="2026-04-17T23:39:27.882470634Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.24929992s" Apr 17 23:39:27.882571 containerd[1456]: time="2026-04-17T23:39:27.882522714Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 17 23:39:29.551611 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 17 23:39:29.560852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:39:29.733842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:39:29.738097 (kubelet)[2059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:39:29.773693 kubelet[2059]: E0417 23:39:29.773654 2059 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:39:29.777340 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:39:29.777734 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:39:30.560304 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:39:30.564899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:39:30.595144 systemd[1]: Reloading requested from client PID 2073 ('systemctl') (unit session-7.scope)... Apr 17 23:39:30.595283 systemd[1]: Reloading... Apr 17 23:39:30.734714 zram_generator::config[2113]: No configuration found. Apr 17 23:39:30.843965 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:39:30.917816 systemd[1]: Reloading finished in 322 ms. Apr 17 23:39:30.967340 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 17 23:39:30.967443 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 17 23:39:30.967742 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:39:30.970943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:39:31.132162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
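The systemd warning above about docker.socket names its own fix: the unit still references the legacy /var/run/ path. A minimal drop-in sketch that would silence the warning (the drop-in filename is hypothetical; the empty `ListenStream=` line is the standard systemd idiom for resetting the inherited list before re-adding):

```ini
# /etc/systemd/system/docker.socket.d/10-run-path.conf -- hypothetical drop-in
[Socket]
# Reset the inherited socket list, then re-add it under /run instead of /var/run
ListenStream=
ListenStream=/run/docker.sock
```

systemd already rewrites the path at runtime, as the log line says, so this only removes the warning rather than changing behavior.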
Apr 17 23:39:31.136606 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:39:31.172741 kubelet[2167]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:39:31.172741 kubelet[2167]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:39:31.172741 kubelet[2167]: I0417 23:39:31.171838 2167 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:39:31.408122 kubelet[2167]: I0417 23:39:31.408050 2167 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 17 23:39:31.408122 kubelet[2167]: I0417 23:39:31.408070 2167 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:39:31.410163 kubelet[2167]: I0417 23:39:31.410142 2167 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 23:39:31.410202 kubelet[2167]: I0417 23:39:31.410165 2167 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 23:39:31.410516 kubelet[2167]: I0417 23:39:31.410496 2167 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:39:31.414987 kubelet[2167]: E0417 23:39:31.414963 2167 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.238.189.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.238.189.76:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:39:31.415481 kubelet[2167]: I0417 23:39:31.415371 2167 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:39:31.419865 kubelet[2167]: E0417 23:39:31.419833 2167 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:39:31.420157 kubelet[2167]: I0417 23:39:31.419987 2167 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 17 23:39:31.424281 kubelet[2167]: I0417 23:39:31.424259 2167 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 17 23:39:31.425357 kubelet[2167]: I0417 23:39:31.425326 2167 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:39:31.425486 kubelet[2167]: I0417 23:39:31.425360 2167 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-238-189-76","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:39:31.425486 kubelet[2167]: I0417 23:39:31.425483 2167 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 
23:39:31.425592 kubelet[2167]: I0417 23:39:31.425493 2167 container_manager_linux.go:306] "Creating device plugin manager" Apr 17 23:39:31.425592 kubelet[2167]: I0417 23:39:31.425566 2167 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 23:39:31.427832 kubelet[2167]: I0417 23:39:31.427812 2167 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:39:31.428744 kubelet[2167]: I0417 23:39:31.428011 2167 kubelet.go:475] "Attempting to sync node with API server" Apr 17 23:39:31.428744 kubelet[2167]: I0417 23:39:31.428034 2167 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:39:31.428744 kubelet[2167]: I0417 23:39:31.428068 2167 kubelet.go:387] "Adding apiserver pod source" Apr 17 23:39:31.428744 kubelet[2167]: I0417 23:39:31.428082 2167 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:39:31.432460 kubelet[2167]: E0417 23:39:31.431638 2167 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.238.189.76:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-238-189-76&limit=500&resourceVersion=0\": dial tcp 172.238.189.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:39:31.432460 kubelet[2167]: E0417 23:39:31.431779 2167 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.238.189.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.238.189.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:39:31.432460 kubelet[2167]: I0417 23:39:31.432148 2167 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:39:31.432560 kubelet[2167]: I0417 23:39:31.432537 2167 kubelet.go:940] "Not 
starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:39:31.432560 kubelet[2167]: I0417 23:39:31.432558 2167 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 23:39:31.432624 kubelet[2167]: W0417 23:39:31.432604 2167 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 17 23:39:31.436172 kubelet[2167]: I0417 23:39:31.436154 2167 server.go:1262] "Started kubelet" Apr 17 23:39:31.444389 kubelet[2167]: I0417 23:39:31.443826 2167 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:39:31.445668 kubelet[2167]: I0417 23:39:31.445629 2167 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:39:31.445801 kubelet[2167]: I0417 23:39:31.445783 2167 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 23:39:31.446162 kubelet[2167]: I0417 23:39:31.446012 2167 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:39:31.446162 kubelet[2167]: E0417 23:39:31.444108 2167 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.238.189.76:6443/api/v1/namespaces/default/events\": dial tcp 172.238.189.76:6443: connect: connection refused" event="&Event{ObjectMeta:{172-238-189-76.18a7494c9de2034e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-238-189-76,UID:172-238-189-76,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-238-189-76,},FirstTimestamp:2026-04-17 23:39:31.436127054 +0000 UTC m=+0.296052701,LastTimestamp:2026-04-17 23:39:31.436127054 +0000 UTC 
m=+0.296052701,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-238-189-76,}" Apr 17 23:39:31.446268 kubelet[2167]: I0417 23:39:31.446117 2167 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:39:31.448042 kubelet[2167]: I0417 23:39:31.448029 2167 server.go:310] "Adding debug handlers to kubelet server" Apr 17 23:39:31.449649 kubelet[2167]: I0417 23:39:31.449624 2167 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:39:31.451457 kubelet[2167]: I0417 23:39:31.451313 2167 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 17 23:39:31.451457 kubelet[2167]: E0417 23:39:31.451426 2167 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-238-189-76\" not found" Apr 17 23:39:31.451842 kubelet[2167]: I0417 23:39:31.451822 2167 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 23:39:31.451875 kubelet[2167]: I0417 23:39:31.451864 2167 reconciler.go:29] "Reconciler: start to sync state" Apr 17 23:39:31.452660 kubelet[2167]: E0417 23:39:31.452122 2167 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.238.189.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.238.189.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:39:31.452660 kubelet[2167]: E0417 23:39:31.452174 2167 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.189.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-189-76?timeout=10s\": dial tcp 172.238.189.76:6443: connect: connection refused" interval="200ms" Apr 17 23:39:31.454279 kubelet[2167]: E0417 23:39:31.454258 2167 
kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:39:31.454546 kubelet[2167]: I0417 23:39:31.454341 2167 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:39:31.454546 kubelet[2167]: I0417 23:39:31.454545 2167 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:39:31.455093 kubelet[2167]: I0417 23:39:31.454594 2167 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:39:31.470156 kubelet[2167]: I0417 23:39:31.470132 2167 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 17 23:39:31.471814 kubelet[2167]: I0417 23:39:31.471800 2167 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 17 23:39:31.471894 kubelet[2167]: I0417 23:39:31.471883 2167 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 17 23:39:31.471958 kubelet[2167]: I0417 23:39:31.471948 2167 kubelet.go:2428] "Starting kubelet main sync loop" Apr 17 23:39:31.472048 kubelet[2167]: E0417 23:39:31.472033 2167 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:39:31.478085 kubelet[2167]: E0417 23:39:31.478060 2167 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.238.189.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.238.189.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:39:31.483771 kubelet[2167]: I0417 23:39:31.483682 2167 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:39:31.484122 
kubelet[2167]: I0417 23:39:31.483922 2167 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:39:31.484122 kubelet[2167]: I0417 23:39:31.483939 2167 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:39:31.485283 kubelet[2167]: I0417 23:39:31.485264 2167 policy_none.go:49] "None policy: Start" Apr 17 23:39:31.485283 kubelet[2167]: I0417 23:39:31.485283 2167 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 23:39:31.485283 kubelet[2167]: I0417 23:39:31.485295 2167 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 23:39:31.486162 kubelet[2167]: I0417 23:39:31.486149 2167 policy_none.go:47] "Start" Apr 17 23:39:31.490430 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 17 23:39:31.502871 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 17 23:39:31.506534 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 17 23:39:31.516350 kubelet[2167]: E0417 23:39:31.515438 2167 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:39:31.516350 kubelet[2167]: I0417 23:39:31.515819 2167 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:39:31.516350 kubelet[2167]: I0417 23:39:31.515829 2167 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:39:31.516350 kubelet[2167]: I0417 23:39:31.516283 2167 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:39:31.518854 kubelet[2167]: E0417 23:39:31.518827 2167 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 17 23:39:31.519102 kubelet[2167]: E0417 23:39:31.519090 2167 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-238-189-76\" not found" Apr 17 23:39:31.582317 systemd[1]: Created slice kubepods-burstable-podf40379a9890f2b62cd9fbd27501d4bf5.slice - libcontainer container kubepods-burstable-podf40379a9890f2b62cd9fbd27501d4bf5.slice. Apr 17 23:39:31.594419 kubelet[2167]: E0417 23:39:31.594396 2167 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-189-76\" not found" node="172-238-189-76" Apr 17 23:39:31.596100 systemd[1]: Created slice kubepods-burstable-pod9b8b2fe00ec931fffd1383b3a59d1b40.slice - libcontainer container kubepods-burstable-pod9b8b2fe00ec931fffd1383b3a59d1b40.slice. Apr 17 23:39:31.600970 kubelet[2167]: E0417 23:39:31.600947 2167 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-189-76\" not found" node="172-238-189-76" Apr 17 23:39:31.603616 systemd[1]: Created slice kubepods-burstable-poddf514b4cf88bedfd9cf8ad26656d42cf.slice - libcontainer container kubepods-burstable-poddf514b4cf88bedfd9cf8ad26656d42cf.slice. 
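Every "dial tcp 172.238.189.76:6443: connect: connection refused" above is the same condition: the kubelet comes up before the kube-apiserver static pod it is about to create, so nothing is listening on port 6443 yet. A minimal sketch of that TCP check (the helper name is ours, not from the log):

```python
import socket

def api_server_reachable(host: str, port: int = 6443, timeout: float = 2.0) -> bool:
    """Attempt a single TCP connect, like the dials failing in the log above."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError as well as timeouts
        return False

# Until the kube-apiserver static pod starts listening on 6443, this
# returns False -- hence the kubelet's "will retry" messages above.
```

The kubelet simply keeps retrying its watches and lease updates until the apiserver container from the manifests directory starts answering.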
Apr 17 23:39:31.605372 kubelet[2167]: E0417 23:39:31.605339 2167 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-189-76\" not found" node="172-238-189-76" Apr 17 23:39:31.617386 kubelet[2167]: I0417 23:39:31.617370 2167 kubelet_node_status.go:75] "Attempting to register node" node="172-238-189-76" Apr 17 23:39:31.617608 kubelet[2167]: E0417 23:39:31.617590 2167 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.189.76:6443/api/v1/nodes\": dial tcp 172.238.189.76:6443: connect: connection refused" node="172-238-189-76" Apr 17 23:39:31.652906 kubelet[2167]: E0417 23:39:31.652879 2167 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.189.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-189-76?timeout=10s\": dial tcp 172.238.189.76:6443: connect: connection refused" interval="400ms" Apr 17 23:39:31.753317 kubelet[2167]: I0417 23:39:31.753238 2167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f40379a9890f2b62cd9fbd27501d4bf5-flexvolume-dir\") pod \"kube-controller-manager-172-238-189-76\" (UID: \"f40379a9890f2b62cd9fbd27501d4bf5\") " pod="kube-system/kube-controller-manager-172-238-189-76" Apr 17 23:39:31.753317 kubelet[2167]: I0417 23:39:31.753266 2167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f40379a9890f2b62cd9fbd27501d4bf5-k8s-certs\") pod \"kube-controller-manager-172-238-189-76\" (UID: \"f40379a9890f2b62cd9fbd27501d4bf5\") " pod="kube-system/kube-controller-manager-172-238-189-76" Apr 17 23:39:31.753317 kubelet[2167]: I0417 23:39:31.753285 2167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/f40379a9890f2b62cd9fbd27501d4bf5-usr-share-ca-certificates\") pod \"kube-controller-manager-172-238-189-76\" (UID: \"f40379a9890f2b62cd9fbd27501d4bf5\") " pod="kube-system/kube-controller-manager-172-238-189-76" Apr 17 23:39:31.753317 kubelet[2167]: I0417 23:39:31.753300 2167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df514b4cf88bedfd9cf8ad26656d42cf-usr-share-ca-certificates\") pod \"kube-apiserver-172-238-189-76\" (UID: \"df514b4cf88bedfd9cf8ad26656d42cf\") " pod="kube-system/kube-apiserver-172-238-189-76" Apr 17 23:39:31.753317 kubelet[2167]: I0417 23:39:31.753314 2167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f40379a9890f2b62cd9fbd27501d4bf5-kubeconfig\") pod \"kube-controller-manager-172-238-189-76\" (UID: \"f40379a9890f2b62cd9fbd27501d4bf5\") " pod="kube-system/kube-controller-manager-172-238-189-76" Apr 17 23:39:31.754487 kubelet[2167]: I0417 23:39:31.753327 2167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b2fe00ec931fffd1383b3a59d1b40-kubeconfig\") pod \"kube-scheduler-172-238-189-76\" (UID: \"9b8b2fe00ec931fffd1383b3a59d1b40\") " pod="kube-system/kube-scheduler-172-238-189-76" Apr 17 23:39:31.754487 kubelet[2167]: I0417 23:39:31.753339 2167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df514b4cf88bedfd9cf8ad26656d42cf-ca-certs\") pod \"kube-apiserver-172-238-189-76\" (UID: \"df514b4cf88bedfd9cf8ad26656d42cf\") " pod="kube-system/kube-apiserver-172-238-189-76" Apr 17 23:39:31.754487 kubelet[2167]: I0417 23:39:31.753351 2167 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df514b4cf88bedfd9cf8ad26656d42cf-k8s-certs\") pod \"kube-apiserver-172-238-189-76\" (UID: \"df514b4cf88bedfd9cf8ad26656d42cf\") " pod="kube-system/kube-apiserver-172-238-189-76" Apr 17 23:39:31.754487 kubelet[2167]: I0417 23:39:31.753363 2167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f40379a9890f2b62cd9fbd27501d4bf5-ca-certs\") pod \"kube-controller-manager-172-238-189-76\" (UID: \"f40379a9890f2b62cd9fbd27501d4bf5\") " pod="kube-system/kube-controller-manager-172-238-189-76" Apr 17 23:39:31.819625 kubelet[2167]: I0417 23:39:31.819608 2167 kubelet_node_status.go:75] "Attempting to register node" node="172-238-189-76" Apr 17 23:39:31.821707 kubelet[2167]: E0417 23:39:31.820213 2167 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.189.76:6443/api/v1/nodes\": dial tcp 172.238.189.76:6443: connect: connection refused" node="172-238-189-76" Apr 17 23:39:31.896915 kubelet[2167]: E0417 23:39:31.896877 2167 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:39:31.897677 containerd[1456]: time="2026-04-17T23:39:31.897644954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-238-189-76,Uid:f40379a9890f2b62cd9fbd27501d4bf5,Namespace:kube-system,Attempt:0,}" Apr 17 23:39:31.903172 kubelet[2167]: E0417 23:39:31.903154 2167 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:39:31.903633 containerd[1456]: time="2026-04-17T23:39:31.903608654Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-172-238-189-76,Uid:9b8b2fe00ec931fffd1383b3a59d1b40,Namespace:kube-system,Attempt:0,}" Apr 17 23:39:31.906481 kubelet[2167]: E0417 23:39:31.906464 2167 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:39:31.906933 containerd[1456]: time="2026-04-17T23:39:31.906913324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-238-189-76,Uid:df514b4cf88bedfd9cf8ad26656d42cf,Namespace:kube-system,Attempt:0,}" Apr 17 23:39:32.053911 kubelet[2167]: E0417 23:39:32.053883 2167 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.189.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-189-76?timeout=10s\": dial tcp 172.238.189.76:6443: connect: connection refused" interval="800ms" Apr 17 23:39:32.103439 kubelet[2167]: E0417 23:39:32.103353 2167 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.238.189.76:6443/api/v1/namespaces/default/events\": dial tcp 172.238.189.76:6443: connect: connection refused" event="&Event{ObjectMeta:{172-238-189-76.18a7494c9de2034e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-238-189-76,UID:172-238-189-76,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-238-189-76,},FirstTimestamp:2026-04-17 23:39:31.436127054 +0000 UTC m=+0.296052701,LastTimestamp:2026-04-17 23:39:31.436127054 +0000 UTC m=+0.296052701,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-238-189-76,}" Apr 17 23:39:32.222283 kubelet[2167]: I0417 23:39:32.222265 2167 kubelet_node_status.go:75] "Attempting to register node" 
node="172-238-189-76" Apr 17 23:39:32.222847 kubelet[2167]: E0417 23:39:32.222718 2167 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.189.76:6443/api/v1/nodes\": dial tcp 172.238.189.76:6443: connect: connection refused" node="172-238-189-76" Apr 17 23:39:32.372767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2318135614.mount: Deactivated successfully. Apr 17 23:39:32.377578 containerd[1456]: time="2026-04-17T23:39:32.377538364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:39:32.378227 containerd[1456]: time="2026-04-17T23:39:32.378160484Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062" Apr 17 23:39:32.379013 containerd[1456]: time="2026-04-17T23:39:32.378974994Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:39:32.380297 containerd[1456]: time="2026-04-17T23:39:32.380273794Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:39:32.381108 containerd[1456]: time="2026-04-17T23:39:32.381083794Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:39:32.381606 containerd[1456]: time="2026-04-17T23:39:32.381529774Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:39:32.382175 containerd[1456]: time="2026-04-17T23:39:32.382115734Z" level=info msg="stop pulling 
image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:39:32.384199 containerd[1456]: time="2026-04-17T23:39:32.384155064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:39:32.385169 containerd[1456]: time="2026-04-17T23:39:32.385141814Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 487.41137ms" Apr 17 23:39:32.386606 containerd[1456]: time="2026-04-17T23:39:32.386535794Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 479.58175ms" Apr 17 23:39:32.388676 containerd[1456]: time="2026-04-17T23:39:32.388263164Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 484.60957ms" Apr 17 23:39:32.475751 containerd[1456]: time="2026-04-17T23:39:32.474451284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:32.475751 containerd[1456]: time="2026-04-17T23:39:32.474503624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:32.475751 containerd[1456]: time="2026-04-17T23:39:32.474527004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:32.475751 containerd[1456]: time="2026-04-17T23:39:32.475408094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:32.479118 containerd[1456]: time="2026-04-17T23:39:32.479050844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:32.479157 containerd[1456]: time="2026-04-17T23:39:32.479132144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:32.479192 containerd[1456]: time="2026-04-17T23:39:32.479159114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:32.479353 containerd[1456]: time="2026-04-17T23:39:32.479318614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:32.485352 containerd[1456]: time="2026-04-17T23:39:32.484986334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:32.485352 containerd[1456]: time="2026-04-17T23:39:32.485071554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:32.485352 containerd[1456]: time="2026-04-17T23:39:32.485101644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:32.485352 containerd[1456]: time="2026-04-17T23:39:32.485203024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:32.505342 systemd[1]: Started cri-containerd-c1375699caa1f52c2c140c479d43edf4af24f2440f2297803ace6765d8e9cf1f.scope - libcontainer container c1375699caa1f52c2c140c479d43edf4af24f2440f2297803ace6765d8e9cf1f. Apr 17 23:39:32.520340 systemd[1]: Started cri-containerd-f30cdce3d6cca87602759d7d500ee2f90234e96d65be6e0263f7392e9dea7dc6.scope - libcontainer container f30cdce3d6cca87602759d7d500ee2f90234e96d65be6e0263f7392e9dea7dc6. Apr 17 23:39:32.528670 systemd[1]: Started cri-containerd-3b3d49cc52060a97556781ec7a88face2753739b95ba985a63d6510f82f9d2dd.scope - libcontainer container 3b3d49cc52060a97556781ec7a88face2753739b95ba985a63d6510f82f9d2dd. Apr 17 23:39:32.591662 containerd[1456]: time="2026-04-17T23:39:32.591435094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-238-189-76,Uid:df514b4cf88bedfd9cf8ad26656d42cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1375699caa1f52c2c140c479d43edf4af24f2440f2297803ace6765d8e9cf1f\"" Apr 17 23:39:32.594435 kubelet[2167]: E0417 23:39:32.594344 2167 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:39:32.602685 containerd[1456]: time="2026-04-17T23:39:32.602649864Z" level=info msg="CreateContainer within sandbox \"c1375699caa1f52c2c140c479d43edf4af24f2440f2297803ace6765d8e9cf1f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:39:32.603160 containerd[1456]: time="2026-04-17T23:39:32.602962314Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-172-238-189-76,Uid:9b8b2fe00ec931fffd1383b3a59d1b40,Namespace:kube-system,Attempt:0,} returns sandbox id \"f30cdce3d6cca87602759d7d500ee2f90234e96d65be6e0263f7392e9dea7dc6\"" Apr 17 23:39:32.603912 kubelet[2167]: E0417 23:39:32.603834 2167 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.238.189.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.238.189.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:39:32.604158 kubelet[2167]: E0417 23:39:32.604126 2167 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:39:32.607181 containerd[1456]: time="2026-04-17T23:39:32.607161574Z" level=info msg="CreateContainer within sandbox \"f30cdce3d6cca87602759d7d500ee2f90234e96d65be6e0263f7392e9dea7dc6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:39:32.614687 containerd[1456]: time="2026-04-17T23:39:32.614652484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-238-189-76,Uid:f40379a9890f2b62cd9fbd27501d4bf5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b3d49cc52060a97556781ec7a88face2753739b95ba985a63d6510f82f9d2dd\"" Apr 17 23:39:32.619035 kubelet[2167]: E0417 23:39:32.619020 2167 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:39:32.621921 containerd[1456]: time="2026-04-17T23:39:32.621829274Z" level=info msg="CreateContainer within sandbox \"c1375699caa1f52c2c140c479d43edf4af24f2440f2297803ace6765d8e9cf1f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"bef74d7561cf443c2fdde8656d2fefba2c0f31f70aa7f987b50623a7802c218a\"" Apr 17 23:39:32.623284 containerd[1456]: time="2026-04-17T23:39:32.623210554Z" level=info msg="CreateContainer within sandbox \"f30cdce3d6cca87602759d7d500ee2f90234e96d65be6e0263f7392e9dea7dc6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bb61cb10332cea76d5430bb0cce79462d788f96589370205da12f52bec27855f\"" Apr 17 23:39:32.624191 containerd[1456]: time="2026-04-17T23:39:32.623631904Z" level=info msg="StartContainer for \"bef74d7561cf443c2fdde8656d2fefba2c0f31f70aa7f987b50623a7802c218a\"" Apr 17 23:39:32.625718 containerd[1456]: time="2026-04-17T23:39:32.624657004Z" level=info msg="CreateContainer within sandbox \"3b3d49cc52060a97556781ec7a88face2753739b95ba985a63d6510f82f9d2dd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:39:32.625754 containerd[1456]: time="2026-04-17T23:39:32.625719154Z" level=info msg="StartContainer for \"bb61cb10332cea76d5430bb0cce79462d788f96589370205da12f52bec27855f\"" Apr 17 23:39:32.637848 containerd[1456]: time="2026-04-17T23:39:32.637820004Z" level=info msg="CreateContainer within sandbox \"3b3d49cc52060a97556781ec7a88face2753739b95ba985a63d6510f82f9d2dd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d8a5b312572a1da014afdfcd576227e6385c83dfad48a85234008dc424ea20bb\"" Apr 17 23:39:32.638407 containerd[1456]: time="2026-04-17T23:39:32.638378624Z" level=info msg="StartContainer for \"d8a5b312572a1da014afdfcd576227e6385c83dfad48a85234008dc424ea20bb\"" Apr 17 23:39:32.671900 systemd[1]: Started cri-containerd-bb61cb10332cea76d5430bb0cce79462d788f96589370205da12f52bec27855f.scope - libcontainer container bb61cb10332cea76d5430bb0cce79462d788f96589370205da12f52bec27855f. 
Apr 17 23:39:32.675993 systemd[1]: Started cri-containerd-bef74d7561cf443c2fdde8656d2fefba2c0f31f70aa7f987b50623a7802c218a.scope - libcontainer container bef74d7561cf443c2fdde8656d2fefba2c0f31f70aa7f987b50623a7802c218a. Apr 17 23:39:32.703817 systemd[1]: Started cri-containerd-d8a5b312572a1da014afdfcd576227e6385c83dfad48a85234008dc424ea20bb.scope - libcontainer container d8a5b312572a1da014afdfcd576227e6385c83dfad48a85234008dc424ea20bb. Apr 17 23:39:32.728711 kubelet[2167]: E0417 23:39:32.727084 2167 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.238.189.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.238.189.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:39:32.750896 containerd[1456]: time="2026-04-17T23:39:32.750861264Z" level=info msg="StartContainer for \"bb61cb10332cea76d5430bb0cce79462d788f96589370205da12f52bec27855f\" returns successfully" Apr 17 23:39:32.778897 containerd[1456]: time="2026-04-17T23:39:32.778813884Z" level=info msg="StartContainer for \"bef74d7561cf443c2fdde8656d2fefba2c0f31f70aa7f987b50623a7802c218a\" returns successfully" Apr 17 23:39:32.786930 containerd[1456]: time="2026-04-17T23:39:32.786902484Z" level=info msg="StartContainer for \"d8a5b312572a1da014afdfcd576227e6385c83dfad48a85234008dc424ea20bb\" returns successfully" Apr 17 23:39:33.024721 kubelet[2167]: I0417 23:39:33.024625 2167 kubelet_node_status.go:75] "Attempting to register node" node="172-238-189-76" Apr 17 23:39:33.491499 kubelet[2167]: E0417 23:39:33.491252 2167 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-189-76\" not found" node="172-238-189-76" Apr 17 23:39:33.491499 kubelet[2167]: E0417 23:39:33.491366 2167 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:39:33.494633 kubelet[2167]: E0417 23:39:33.494104 2167 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-189-76\" not found" node="172-238-189-76" Apr 17 23:39:33.494633 kubelet[2167]: E0417 23:39:33.494184 2167 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:39:33.497984 kubelet[2167]: E0417 23:39:33.497893 2167 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-189-76\" not found" node="172-238-189-76" Apr 17 23:39:33.498050 kubelet[2167]: E0417 23:39:33.498038 2167 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:39:33.862085 kubelet[2167]: E0417 23:39:33.861968 2167 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-238-189-76\" not found" node="172-238-189-76" Apr 17 23:39:33.896053 kubelet[2167]: I0417 23:39:33.894824 2167 kubelet_node_status.go:78] "Successfully registered node" node="172-238-189-76" Apr 17 23:39:33.952562 kubelet[2167]: I0417 23:39:33.952195 2167 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-189-76" Apr 17 23:39:33.959215 kubelet[2167]: E0417 23:39:33.959185 2167 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-238-189-76\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-238-189-76" Apr 17 23:39:33.959215 kubelet[2167]: I0417 23:39:33.959208 2167 kubelet.go:3220] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-172-238-189-76" Apr 17 23:39:33.960238 kubelet[2167]: E0417 23:39:33.960119 2167 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-189-76\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-238-189-76" Apr 17 23:39:33.960238 kubelet[2167]: I0417 23:39:33.960135 2167 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-189-76" Apr 17 23:39:33.961518 kubelet[2167]: E0417 23:39:33.961476 2167 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-189-76\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-238-189-76" Apr 17 23:39:34.429298 kubelet[2167]: I0417 23:39:34.429255 2167 apiserver.go:52] "Watching apiserver" Apr 17 23:39:34.452465 kubelet[2167]: I0417 23:39:34.452443 2167 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:39:34.500550 kubelet[2167]: I0417 23:39:34.500527 2167 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-189-76" Apr 17 23:39:34.501898 kubelet[2167]: I0417 23:39:34.501793 2167 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-189-76" Apr 17 23:39:34.503204 kubelet[2167]: E0417 23:39:34.503170 2167 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-189-76\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-238-189-76" Apr 17 23:39:34.503330 kubelet[2167]: E0417 23:39:34.503313 2167 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:39:34.503365 kubelet[2167]: E0417 23:39:34.503170 2167 kubelet.go:3222] "Failed 
creating a mirror pod" err="pods \"kube-scheduler-172-238-189-76\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-238-189-76" Apr 17 23:39:34.503627 kubelet[2167]: E0417 23:39:34.503613 2167 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:39:35.501969 kubelet[2167]: I0417 23:39:35.501933 2167 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-189-76" Apr 17 23:39:35.507184 kubelet[2167]: E0417 23:39:35.506986 2167 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:39:36.091249 systemd[1]: Reloading requested from client PID 2449 ('systemctl') (unit session-7.scope)... Apr 17 23:39:36.091263 systemd[1]: Reloading... Apr 17 23:39:36.213164 zram_generator::config[2490]: No configuration found. Apr 17 23:39:36.335172 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:39:36.429239 systemd[1]: Reloading finished in 337 ms. Apr 17 23:39:36.474059 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:39:36.483475 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:39:36.483756 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:39:36.488961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:39:36.651671 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:39:36.662016 (kubelet)[2540]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:39:36.695883 kubelet[2540]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:39:36.695883 kubelet[2540]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:39:36.696282 kubelet[2540]: I0417 23:39:36.696178 2540 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:39:36.701795 kubelet[2540]: I0417 23:39:36.701781 2540 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 17 23:39:36.701795 kubelet[2540]: I0417 23:39:36.701794 2540 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:39:36.701893 kubelet[2540]: I0417 23:39:36.701817 2540 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 23:39:36.701893 kubelet[2540]: I0417 23:39:36.701827 2540 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 23:39:36.701971 kubelet[2540]: I0417 23:39:36.701958 2540 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:39:36.702839 kubelet[2540]: I0417 23:39:36.702812 2540 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:39:36.706079 kubelet[2540]: I0417 23:39:36.706059 2540 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:39:36.710801 kubelet[2540]: E0417 23:39:36.710155 2540 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:39:36.710801 kubelet[2540]: I0417 23:39:36.710188 2540 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 17 23:39:36.713046 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 17 23:39:36.715308 kubelet[2540]: I0417 23:39:36.715168 2540 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 17 23:39:36.715838 kubelet[2540]: I0417 23:39:36.715784 2540 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:39:36.716260 kubelet[2540]: I0417 23:39:36.715831 2540 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-238-189-76","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:39:36.716851 kubelet[2540]: I0417 23:39:36.716649 2540 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 
23:39:36.716851 kubelet[2540]: I0417 23:39:36.716668 2540 container_manager_linux.go:306] "Creating device plugin manager"
Apr 17 23:39:36.716851 kubelet[2540]: I0417 23:39:36.716746 2540 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 17 23:39:36.717670 kubelet[2540]: I0417 23:39:36.716942 2540 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:39:36.717670 kubelet[2540]: I0417 23:39:36.717148 2540 kubelet.go:475] "Attempting to sync node with API server"
Apr 17 23:39:36.717670 kubelet[2540]: I0417 23:39:36.717164 2540 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 23:39:36.717670 kubelet[2540]: I0417 23:39:36.717184 2540 kubelet.go:387] "Adding apiserver pod source"
Apr 17 23:39:36.718784 kubelet[2540]: I0417 23:39:36.718770 2540 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 23:39:36.721865 kubelet[2540]: I0417 23:39:36.721781 2540 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 17 23:39:36.724085 kubelet[2540]: I0417 23:39:36.722671 2540 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 17 23:39:36.725083 kubelet[2540]: I0417 23:39:36.724989 2540 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 17 23:39:36.733755 kubelet[2540]: I0417 23:39:36.732200 2540 server.go:1262] "Started kubelet"
Apr 17 23:39:36.733755 kubelet[2540]: I0417 23:39:36.732416 2540 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 17 23:39:36.733755 kubelet[2540]: I0417 23:39:36.732427 2540 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 17 23:39:36.733755 kubelet[2540]: I0417 23:39:36.732456 2540 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 17 23:39:36.733755 kubelet[2540]: I0417 23:39:36.733271 2540 server.go:310] "Adding debug handlers to kubelet server"
Apr 17 23:39:36.735711 kubelet[2540]: I0417 23:39:36.734389 2540 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 23:39:36.737937 kubelet[2540]: I0417 23:39:36.737804 2540 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 17 23:39:36.740418 kubelet[2540]: I0417 23:39:36.739762 2540 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 17 23:39:36.743351 kubelet[2540]: I0417 23:39:36.743070 2540 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 17 23:39:36.743351 kubelet[2540]: I0417 23:39:36.743140 2540 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 17 23:39:36.743351 kubelet[2540]: I0417 23:39:36.743245 2540 reconciler.go:29] "Reconciler: start to sync state"
Apr 17 23:39:36.745809 kubelet[2540]: I0417 23:39:36.745442 2540 factory.go:223] Registration of the systemd container factory successfully
Apr 17 23:39:36.745809 kubelet[2540]: I0417 23:39:36.745530 2540 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 17 23:39:36.748056 kubelet[2540]: I0417 23:39:36.748039 2540 factory.go:223] Registration of the containerd container factory successfully
Apr 17 23:39:36.751878 kubelet[2540]: I0417 23:39:36.751779 2540 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 17 23:39:36.763785 kubelet[2540]: I0417 23:39:36.763596 2540 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 17 23:39:36.763785 kubelet[2540]: I0417 23:39:36.763611 2540 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 17 23:39:36.763785 kubelet[2540]: I0417 23:39:36.763626 2540 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 17 23:39:36.763785 kubelet[2540]: E0417 23:39:36.763664 2540 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 23:39:36.798316 kubelet[2540]: I0417 23:39:36.798286 2540 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 17 23:39:36.798316 kubelet[2540]: I0417 23:39:36.798305 2540 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 17 23:39:36.798316 kubelet[2540]: I0417 23:39:36.798321 2540 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:39:36.798435 kubelet[2540]: I0417 23:39:36.798422 2540 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 17 23:39:36.798458 kubelet[2540]: I0417 23:39:36.798431 2540 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 17 23:39:36.798458 kubelet[2540]: I0417 23:39:36.798446 2540 policy_none.go:49] "None policy: Start"
Apr 17 23:39:36.798458 kubelet[2540]: I0417 23:39:36.798454 2540 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 17 23:39:36.798688 kubelet[2540]: I0417 23:39:36.798465 2540 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 17 23:39:36.798738 kubelet[2540]: I0417 23:39:36.798727 2540 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 17 23:39:36.798759 kubelet[2540]: I0417 23:39:36.798739 2540 policy_none.go:47] "Start"
Apr 17 23:39:36.803612 kubelet[2540]: E0417 23:39:36.803586 2540 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 17 23:39:36.803785 kubelet[2540]: I0417 23:39:36.803767 2540 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 17 23:39:36.803821 kubelet[2540]: I0417 23:39:36.803784 2540 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 17 23:39:36.804210 kubelet[2540]: I0417 23:39:36.804188 2540 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 17 23:39:36.805118 kubelet[2540]: E0417 23:39:36.805069 2540 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 17 23:39:36.864842 kubelet[2540]: I0417 23:39:36.864800 2540 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-189-76"
Apr 17 23:39:36.865106 kubelet[2540]: I0417 23:39:36.864958 2540 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-189-76"
Apr 17 23:39:36.865157 kubelet[2540]: I0417 23:39:36.865108 2540 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-189-76"
Apr 17 23:39:36.873169 kubelet[2540]: E0417 23:39:36.873143 2540 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-189-76\" already exists" pod="kube-system/kube-apiserver-172-238-189-76"
Apr 17 23:39:36.915691 kubelet[2540]: I0417 23:39:36.915671 2540 kubelet_node_status.go:75] "Attempting to register node" node="172-238-189-76"
Apr 17 23:39:36.922816 kubelet[2540]: I0417 23:39:36.922397 2540 kubelet_node_status.go:124] "Node was previously registered" node="172-238-189-76"
Apr 17 23:39:36.922816 kubelet[2540]: I0417 23:39:36.922455 2540 kubelet_node_status.go:78] "Successfully registered node" node="172-238-189-76"
Apr 17 23:39:36.944603 kubelet[2540]: I0417 23:39:36.944560 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df514b4cf88bedfd9cf8ad26656d42cf-ca-certs\") pod \"kube-apiserver-172-238-189-76\" (UID: \"df514b4cf88bedfd9cf8ad26656d42cf\") " pod="kube-system/kube-apiserver-172-238-189-76"
Apr 17 23:39:36.944603 kubelet[2540]: I0417 23:39:36.944600 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df514b4cf88bedfd9cf8ad26656d42cf-k8s-certs\") pod \"kube-apiserver-172-238-189-76\" (UID: \"df514b4cf88bedfd9cf8ad26656d42cf\") " pod="kube-system/kube-apiserver-172-238-189-76"
Apr 17 23:39:36.944776 kubelet[2540]: I0417 23:39:36.944620 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df514b4cf88bedfd9cf8ad26656d42cf-usr-share-ca-certificates\") pod \"kube-apiserver-172-238-189-76\" (UID: \"df514b4cf88bedfd9cf8ad26656d42cf\") " pod="kube-system/kube-apiserver-172-238-189-76"
Apr 17 23:39:36.944776 kubelet[2540]: I0417 23:39:36.944640 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f40379a9890f2b62cd9fbd27501d4bf5-ca-certs\") pod \"kube-controller-manager-172-238-189-76\" (UID: \"f40379a9890f2b62cd9fbd27501d4bf5\") " pod="kube-system/kube-controller-manager-172-238-189-76"
Apr 17 23:39:36.944776 kubelet[2540]: I0417 23:39:36.944657 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f40379a9890f2b62cd9fbd27501d4bf5-kubeconfig\") pod \"kube-controller-manager-172-238-189-76\" (UID: \"f40379a9890f2b62cd9fbd27501d4bf5\") " pod="kube-system/kube-controller-manager-172-238-189-76"
Apr 17 23:39:36.944776 kubelet[2540]: I0417 23:39:36.944682 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b2fe00ec931fffd1383b3a59d1b40-kubeconfig\") pod \"kube-scheduler-172-238-189-76\" (UID: \"9b8b2fe00ec931fffd1383b3a59d1b40\") " pod="kube-system/kube-scheduler-172-238-189-76"
Apr 17 23:39:36.944776 kubelet[2540]: I0417 23:39:36.944721 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f40379a9890f2b62cd9fbd27501d4bf5-flexvolume-dir\") pod \"kube-controller-manager-172-238-189-76\" (UID: \"f40379a9890f2b62cd9fbd27501d4bf5\") " pod="kube-system/kube-controller-manager-172-238-189-76"
Apr 17 23:39:36.944913 kubelet[2540]: I0417 23:39:36.944760 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f40379a9890f2b62cd9fbd27501d4bf5-k8s-certs\") pod \"kube-controller-manager-172-238-189-76\" (UID: \"f40379a9890f2b62cd9fbd27501d4bf5\") " pod="kube-system/kube-controller-manager-172-238-189-76"
Apr 17 23:39:36.944913 kubelet[2540]: I0417 23:39:36.944779 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f40379a9890f2b62cd9fbd27501d4bf5-usr-share-ca-certificates\") pod \"kube-controller-manager-172-238-189-76\" (UID: \"f40379a9890f2b62cd9fbd27501d4bf5\") " pod="kube-system/kube-controller-manager-172-238-189-76"
Apr 17 23:39:37.174501 kubelet[2540]: E0417 23:39:37.174454 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Apr 17 23:39:37.174678 kubelet[2540]: E0417 23:39:37.174638 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Apr 17 23:39:37.174766 kubelet[2540]: E0417 23:39:37.174228 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Apr 17 23:39:37.720022 kubelet[2540]: I0417 23:39:37.719973 2540 apiserver.go:52] "Watching apiserver"
Apr 17 23:39:37.743505 kubelet[2540]: I0417 23:39:37.743479 2540 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 17 23:39:37.784033 kubelet[2540]: I0417 23:39:37.783310 2540 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-189-76"
Apr 17 23:39:37.784392 kubelet[2540]: I0417 23:39:37.784377 2540 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-189-76"
Apr 17 23:39:37.789829 kubelet[2540]: E0417 23:39:37.789812 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Apr 17 23:39:37.803444 kubelet[2540]: E0417 23:39:37.803113 2540 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-189-76\" already exists" pod="kube-system/kube-scheduler-172-238-189-76"
Apr 17 23:39:37.803444 kubelet[2540]: E0417 23:39:37.803228 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Apr 17 23:39:37.805164 kubelet[2540]: E0417 23:39:37.805062 2540 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-189-76\" already exists" pod="kube-system/kube-apiserver-172-238-189-76"
Apr 17 23:39:37.805724 kubelet[2540]: E0417 23:39:37.805334 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Apr 17 23:39:37.878847 kubelet[2540]: I0417 23:39:37.878088 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-238-189-76" podStartSLOduration=1.878072454 podStartE2EDuration="1.878072454s" podCreationTimestamp="2026-04-17 23:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:39:37.867282784 +0000 UTC m=+1.201180151" watchObservedRunningTime="2026-04-17 23:39:37.878072454 +0000 UTC m=+1.211969831"
Apr 17 23:39:37.886617 kubelet[2540]: I0417 23:39:37.886575 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-238-189-76" podStartSLOduration=1.8865619040000001 podStartE2EDuration="1.886561904s" podCreationTimestamp="2026-04-17 23:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:39:37.886441834 +0000 UTC m=+1.220339201" watchObservedRunningTime="2026-04-17 23:39:37.886561904 +0000 UTC m=+1.220459271"
Apr 17 23:39:37.886741 kubelet[2540]: I0417 23:39:37.886644 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-238-189-76" podStartSLOduration=2.886640424 podStartE2EDuration="2.886640424s" podCreationTimestamp="2026-04-17 23:39:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:39:37.879118134 +0000 UTC m=+1.213015501" watchObservedRunningTime="2026-04-17 23:39:37.886640424 +0000 UTC m=+1.220537801"
Apr 17 23:39:38.784998 kubelet[2540]: E0417 23:39:38.784956 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Apr 17 23:39:38.785530 kubelet[2540]: E0417 23:39:38.785514 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Apr 17 23:39:39.780546 kubelet[2540]: E0417 23:39:39.780511 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Apr 17 23:39:41.146594 kubelet[2540]: I0417 23:39:41.146553 2540 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 17 23:39:41.147387 containerd[1456]: time="2026-04-17T23:39:41.147082288Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 17 23:39:41.147815 kubelet[2540]: I0417 23:39:41.147773 2540 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 17 23:39:42.051786 systemd[1]: Created slice kubepods-besteffort-pod210ca33f_0443_491e_ae12_aae50b7d3c62.slice - libcontainer container kubepods-besteffort-pod210ca33f_0443_491e_ae12_aae50b7d3c62.slice.
Apr 17 23:39:42.079739 kubelet[2540]: I0417 23:39:42.077590 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/210ca33f-0443-491e-ae12-aae50b7d3c62-xtables-lock\") pod \"kube-proxy-66thh\" (UID: \"210ca33f-0443-491e-ae12-aae50b7d3c62\") " pod="kube-system/kube-proxy-66thh"
Apr 17 23:39:42.079739 kubelet[2540]: I0417 23:39:42.077618 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz7cg\" (UniqueName: \"kubernetes.io/projected/210ca33f-0443-491e-ae12-aae50b7d3c62-kube-api-access-sz7cg\") pod \"kube-proxy-66thh\" (UID: \"210ca33f-0443-491e-ae12-aae50b7d3c62\") " pod="kube-system/kube-proxy-66thh"
Apr 17 23:39:42.079739 kubelet[2540]: I0417 23:39:42.077803 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/210ca33f-0443-491e-ae12-aae50b7d3c62-kube-proxy\") pod \"kube-proxy-66thh\" (UID: \"210ca33f-0443-491e-ae12-aae50b7d3c62\") " pod="kube-system/kube-proxy-66thh"
Apr 17 23:39:42.079739 kubelet[2540]: I0417 23:39:42.077815 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/210ca33f-0443-491e-ae12-aae50b7d3c62-lib-modules\") pod \"kube-proxy-66thh\" (UID: \"210ca33f-0443-491e-ae12-aae50b7d3c62\") " pod="kube-system/kube-proxy-66thh"
Apr 17 23:39:42.359975 kubelet[2540]: E0417 23:39:42.359865 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Apr 17 23:39:42.361862 containerd[1456]: time="2026-04-17T23:39:42.361225702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-66thh,Uid:210ca33f-0443-491e-ae12-aae50b7d3c62,Namespace:kube-system,Attempt:0,}"
Apr 17 23:39:42.406499 containerd[1456]: time="2026-04-17T23:39:42.406015574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:39:42.406499 containerd[1456]: time="2026-04-17T23:39:42.406075334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:39:42.406499 containerd[1456]: time="2026-04-17T23:39:42.406100284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:39:42.406499 containerd[1456]: time="2026-04-17T23:39:42.406210473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:39:42.444851 systemd[1]: Started cri-containerd-c1228211b5670ef684db4345976e47b592df07ea6bbdfbed01d756190b9d3110.scope - libcontainer container c1228211b5670ef684db4345976e47b592df07ea6bbdfbed01d756190b9d3110.
Apr 17 23:39:42.455869 systemd[1]: Created slice kubepods-besteffort-pod39831f24_ec91_448e_be26_c9a7dab0ebb6.slice - libcontainer container kubepods-besteffort-pod39831f24_ec91_448e_be26_c9a7dab0ebb6.slice.
Apr 17 23:39:42.481209 kubelet[2540]: I0417 23:39:42.481100 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/39831f24-ec91-448e-be26-c9a7dab0ebb6-var-lib-calico\") pod \"tigera-operator-5588576f44-vwxv5\" (UID: \"39831f24-ec91-448e-be26-c9a7dab0ebb6\") " pod="tigera-operator/tigera-operator-5588576f44-vwxv5"
Apr 17 23:39:42.481209 kubelet[2540]: I0417 23:39:42.481176 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5pb5\" (UniqueName: \"kubernetes.io/projected/39831f24-ec91-448e-be26-c9a7dab0ebb6-kube-api-access-b5pb5\") pod \"tigera-operator-5588576f44-vwxv5\" (UID: \"39831f24-ec91-448e-be26-c9a7dab0ebb6\") " pod="tigera-operator/tigera-operator-5588576f44-vwxv5"
Apr 17 23:39:42.486186 containerd[1456]: time="2026-04-17T23:39:42.486002016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-66thh,Uid:210ca33f-0443-491e-ae12-aae50b7d3c62,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1228211b5670ef684db4345976e47b592df07ea6bbdfbed01d756190b9d3110\""
Apr 17 23:39:42.487769 kubelet[2540]: E0417 23:39:42.487150 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Apr 17 23:39:42.491909 containerd[1456]: time="2026-04-17T23:39:42.491878852Z" level=info msg="CreateContainer within sandbox \"c1228211b5670ef684db4345976e47b592df07ea6bbdfbed01d756190b9d3110\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 17 23:39:42.505361 containerd[1456]: time="2026-04-17T23:39:42.505337539Z" level=info msg="CreateContainer within sandbox \"c1228211b5670ef684db4345976e47b592df07ea6bbdfbed01d756190b9d3110\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"59fdc940c067d51da350db741c5b29d7774e2ee6a62d48cb00ba8b9cb600287b\""
Apr 17 23:39:42.507968 containerd[1456]: time="2026-04-17T23:39:42.507920748Z" level=info msg="StartContainer for \"59fdc940c067d51da350db741c5b29d7774e2ee6a62d48cb00ba8b9cb600287b\""
Apr 17 23:39:42.543866 systemd[1]: Started cri-containerd-59fdc940c067d51da350db741c5b29d7774e2ee6a62d48cb00ba8b9cb600287b.scope - libcontainer container 59fdc940c067d51da350db741c5b29d7774e2ee6a62d48cb00ba8b9cb600287b.
Apr 17 23:39:42.575723 containerd[1456]: time="2026-04-17T23:39:42.575439130Z" level=info msg="StartContainer for \"59fdc940c067d51da350db741c5b29d7774e2ee6a62d48cb00ba8b9cb600287b\" returns successfully"
Apr 17 23:39:42.761198 containerd[1456]: time="2026-04-17T23:39:42.761078762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-vwxv5,Uid:39831f24-ec91-448e-be26-c9a7dab0ebb6,Namespace:tigera-operator,Attempt:0,}"
Apr 17 23:39:42.792482 kubelet[2540]: E0417 23:39:42.792129 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Apr 17 23:39:42.799977 containerd[1456]: time="2026-04-17T23:39:42.799750808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:39:42.799977 containerd[1456]: time="2026-04-17T23:39:42.799833687Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:39:42.799977 containerd[1456]: time="2026-04-17T23:39:42.799844487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:39:42.800439 containerd[1456]: time="2026-04-17T23:39:42.800341555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:39:42.825056 systemd[1]: Started cri-containerd-47663a19fedad501a18358aff22e0ed4dc17baa56843390997af694b3398be3b.scope - libcontainer container 47663a19fedad501a18358aff22e0ed4dc17baa56843390997af694b3398be3b.
Apr 17 23:39:42.877346 containerd[1456]: time="2026-04-17T23:39:42.877302669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-vwxv5,Uid:39831f24-ec91-448e-be26-c9a7dab0ebb6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"47663a19fedad501a18358aff22e0ed4dc17baa56843390997af694b3398be3b\""
Apr 17 23:39:42.879811 containerd[1456]: time="2026-04-17T23:39:42.879784739Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 17 23:39:43.196040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount137408899.mount: Deactivated successfully.
Apr 17 23:39:43.750547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3442165492.mount: Deactivated successfully.
Apr 17 23:39:44.780205 containerd[1456]: time="2026-04-17T23:39:44.780152033Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:39:44.781040 containerd[1456]: time="2026-04-17T23:39:44.780938890Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 17 23:39:44.782721 containerd[1456]: time="2026-04-17T23:39:44.781534968Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:39:44.783728 containerd[1456]: time="2026-04-17T23:39:44.783264572Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:39:44.784448 containerd[1456]: time="2026-04-17T23:39:44.784006700Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 1.904192351s"
Apr 17 23:39:44.784448 containerd[1456]: time="2026-04-17T23:39:44.784038199Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 17 23:39:44.786582 containerd[1456]: time="2026-04-17T23:39:44.786549271Z" level=info msg="CreateContainer within sandbox \"47663a19fedad501a18358aff22e0ed4dc17baa56843390997af694b3398be3b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 17 23:39:44.796649 containerd[1456]: time="2026-04-17T23:39:44.796616735Z" level=info msg="CreateContainer within sandbox \"47663a19fedad501a18358aff22e0ed4dc17baa56843390997af694b3398be3b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c9aaa4ce019794fb481b62b787c5d402977bd7706b6e43e96298ea770b17105e\""
Apr 17 23:39:44.798102 containerd[1456]: time="2026-04-17T23:39:44.798073470Z" level=info msg="StartContainer for \"c9aaa4ce019794fb481b62b787c5d402977bd7706b6e43e96298ea770b17105e\""
Apr 17 23:39:44.849808 systemd[1]: Started cri-containerd-c9aaa4ce019794fb481b62b787c5d402977bd7706b6e43e96298ea770b17105e.scope - libcontainer container c9aaa4ce019794fb481b62b787c5d402977bd7706b6e43e96298ea770b17105e.
Apr 17 23:39:44.879035 containerd[1456]: time="2026-04-17T23:39:44.878998117Z" level=info msg="StartContainer for \"c9aaa4ce019794fb481b62b787c5d402977bd7706b6e43e96298ea770b17105e\" returns successfully"
Apr 17 23:39:45.829012 kubelet[2540]: I0417 23:39:45.828966 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-66thh" podStartSLOduration=3.828951697 podStartE2EDuration="3.828951697s" podCreationTimestamp="2026-04-17 23:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:39:42.804379809 +0000 UTC m=+6.138277186" watchObservedRunningTime="2026-04-17 23:39:45.828951697 +0000 UTC m=+9.162849064"
Apr 17 23:39:47.241779 kubelet[2540]: E0417 23:39:47.241087 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Apr 17 23:39:47.426470 kubelet[2540]: I0417 23:39:47.426114 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-vwxv5" podStartSLOduration=3.5206503590000002 podStartE2EDuration="5.426096885s" podCreationTimestamp="2026-04-17 23:39:42 +0000 UTC" firstStartedPulling="2026-04-17 23:39:42.879131642 +0000 UTC m=+6.213029009" lastFinishedPulling="2026-04-17 23:39:44.784578168 +0000 UTC m=+8.118475535" observedRunningTime="2026-04-17 23:39:45.829973233 +0000 UTC m=+9.163870600" watchObservedRunningTime="2026-04-17 23:39:47.426096885 +0000 UTC m=+10.759994252"
Apr 17 23:39:47.723956 kubelet[2540]: E0417 23:39:47.723680 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Apr 17 23:39:47.820815 kubelet[2540]: E0417 23:39:47.820776 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Apr 17 23:39:49.786402 kubelet[2540]: E0417 23:39:49.786357 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Apr 17 23:39:50.221169 sudo[1671]: pam_unix(sudo:session): session closed for user root
Apr 17 23:39:50.318882 sshd[1668]: pam_unix(sshd:session): session closed for user core
Apr 17 23:39:50.322960 systemd[1]: sshd@6-172.238.189.76:22-50.85.169.122:46702.service: Deactivated successfully.
Apr 17 23:39:50.330414 systemd[1]: session-7.scope: Deactivated successfully.
Apr 17 23:39:50.331083 systemd[1]: session-7.scope: Consumed 4.646s CPU time, 157.9M memory peak, 0B memory swap peak.
Apr 17 23:39:50.335229 systemd-logind[1438]: Session 7 logged out. Waiting for processes to exit.
Apr 17 23:39:50.338949 systemd-logind[1438]: Removed session 7.
Apr 17 23:39:51.279361 systemd[1]: Created slice kubepods-besteffort-pod9249417e_08df_4bbe_ba99_6af5f5d23352.slice - libcontainer container kubepods-besteffort-pod9249417e_08df_4bbe_ba99_6af5f5d23352.slice.
Apr 17 23:39:51.343739 kubelet[2540]: I0417 23:39:51.343570 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9249417e-08df-4bbe-ba99-6af5f5d23352-tigera-ca-bundle\") pod \"calico-typha-76d8549644-bw799\" (UID: \"9249417e-08df-4bbe-ba99-6af5f5d23352\") " pod="calico-system/calico-typha-76d8549644-bw799"
Apr 17 23:39:51.343739 kubelet[2540]: I0417 23:39:51.343613 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krtth\" (UniqueName: \"kubernetes.io/projected/9249417e-08df-4bbe-ba99-6af5f5d23352-kube-api-access-krtth\") pod \"calico-typha-76d8549644-bw799\" (UID: \"9249417e-08df-4bbe-ba99-6af5f5d23352\") " pod="calico-system/calico-typha-76d8549644-bw799"
Apr 17 23:39:51.343739 kubelet[2540]: I0417 23:39:51.343635 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9249417e-08df-4bbe-ba99-6af5f5d23352-typha-certs\") pod \"calico-typha-76d8549644-bw799\" (UID: \"9249417e-08df-4bbe-ba99-6af5f5d23352\") " pod="calico-system/calico-typha-76d8549644-bw799"
Apr 17 23:39:51.375628 systemd[1]: Created slice kubepods-besteffort-pod8b8b92b3_1100_4b20_98e6_d86262eee269.slice - libcontainer container kubepods-besteffort-pod8b8b92b3_1100_4b20_98e6_d86262eee269.slice.
Apr 17 23:39:51.444128 kubelet[2540]: I0417 23:39:51.444082 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8b8b92b3-1100-4b20-98e6-d86262eee269-policysync\") pod \"calico-node-bfnsx\" (UID: \"8b8b92b3-1100-4b20-98e6-d86262eee269\") " pod="calico-system/calico-node-bfnsx"
Apr 17 23:39:51.444128 kubelet[2540]: I0417 23:39:51.444128 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8b8b92b3-1100-4b20-98e6-d86262eee269-var-run-calico\") pod \"calico-node-bfnsx\" (UID: \"8b8b92b3-1100-4b20-98e6-d86262eee269\") " pod="calico-system/calico-node-bfnsx"
Apr 17 23:39:51.444284 kubelet[2540]: I0417 23:39:51.444144 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b8b92b3-1100-4b20-98e6-d86262eee269-xtables-lock\") pod \"calico-node-bfnsx\" (UID: \"8b8b92b3-1100-4b20-98e6-d86262eee269\") " pod="calico-system/calico-node-bfnsx"
Apr 17 23:39:51.444284 kubelet[2540]: I0417 23:39:51.444169 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/8b8b92b3-1100-4b20-98e6-d86262eee269-sys-fs\") pod \"calico-node-bfnsx\" (UID: \"8b8b92b3-1100-4b20-98e6-d86262eee269\") " pod="calico-system/calico-node-bfnsx"
Apr 17 23:39:51.444284 kubelet[2540]: I0417 23:39:51.444183 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-278c5\" (UniqueName: \"kubernetes.io/projected/8b8b92b3-1100-4b20-98e6-d86262eee269-kube-api-access-278c5\") pod \"calico-node-bfnsx\" (UID: \"8b8b92b3-1100-4b20-98e6-d86262eee269\") " pod="calico-system/calico-node-bfnsx"
Apr 17 23:39:51.444284 kubelet[2540]: I0417 23:39:51.444198 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/8b8b92b3-1100-4b20-98e6-d86262eee269-bpffs\") pod \"calico-node-bfnsx\" (UID: \"8b8b92b3-1100-4b20-98e6-d86262eee269\") " pod="calico-system/calico-node-bfnsx"
Apr 17 23:39:51.444284 kubelet[2540]: I0417 23:39:51.444213 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b8b92b3-1100-4b20-98e6-d86262eee269-lib-modules\") pod \"calico-node-bfnsx\" (UID: \"8b8b92b3-1100-4b20-98e6-d86262eee269\") " pod="calico-system/calico-node-bfnsx"
Apr 17 23:39:51.444430 kubelet[2540]: I0417 23:39:51.444226 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8b8b92b3-1100-4b20-98e6-d86262eee269-node-certs\") pod \"calico-node-bfnsx\" (UID: \"8b8b92b3-1100-4b20-98e6-d86262eee269\") " pod="calico-system/calico-node-bfnsx"
Apr 17 23:39:51.444430 kubelet[2540]: I0417 23:39:51.444239 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8b8b92b3-1100-4b20-98e6-d86262eee269-cni-log-dir\") pod \"calico-node-bfnsx\" (UID: \"8b8b92b3-1100-4b20-98e6-d86262eee269\") " pod="calico-system/calico-node-bfnsx"
Apr 17 23:39:51.444430 kubelet[2540]: I0417 23:39:51.444252 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8b8b92b3-1100-4b20-98e6-d86262eee269-cni-net-dir\") pod \"calico-node-bfnsx\" (UID: \"8b8b92b3-1100-4b20-98e6-d86262eee269\") " pod="calico-system/calico-node-bfnsx"
Apr 17 23:39:51.444430 kubelet[2540]: I0417 23:39:51.444266 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8b8b92b3-1100-4b20-98e6-d86262eee269-cni-bin-dir\") pod \"calico-node-bfnsx\" (UID: \"8b8b92b3-1100-4b20-98e6-d86262eee269\") " pod="calico-system/calico-node-bfnsx"
Apr 17 23:39:51.444430 kubelet[2540]: I0417 23:39:51.444281 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b8b92b3-1100-4b20-98e6-d86262eee269-tigera-ca-bundle\") pod \"calico-node-bfnsx\" (UID: \"8b8b92b3-1100-4b20-98e6-d86262eee269\") " pod="calico-system/calico-node-bfnsx"
Apr 17 23:39:51.444784 kubelet[2540]: I0417 23:39:51.444294 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8b8b92b3-1100-4b20-98e6-d86262eee269-var-lib-calico\") pod \"calico-node-bfnsx\" (UID: \"8b8b92b3-1100-4b20-98e6-d86262eee269\") " pod="calico-system/calico-node-bfnsx"
Apr 17 23:39:51.444784 kubelet[2540]: I0417 23:39:51.444308 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8b8b92b3-1100-4b20-98e6-d86262eee269-flexvol-driver-host\") pod \"calico-node-bfnsx\" (UID: \"8b8b92b3-1100-4b20-98e6-d86262eee269\") " pod="calico-system/calico-node-bfnsx"
Apr 17 23:39:51.444784 kubelet[2540]: I0417 23:39:51.444322 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/8b8b92b3-1100-4b20-98e6-d86262eee269-nodeproc\") pod \"calico-node-bfnsx\" (UID: \"8b8b92b3-1100-4b20-98e6-d86262eee269\") " pod="calico-system/calico-node-bfnsx"
Apr 17 23:39:51.485887 kubelet[2540]: E0417 23:39:51.485842 2540 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9m4mr" podUID="7519db54-398f-4489-8839-90013af059d5" Apr 17 23:39:51.545417 kubelet[2540]: I0417 23:39:51.544487 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7519db54-398f-4489-8839-90013af059d5-kubelet-dir\") pod \"csi-node-driver-9m4mr\" (UID: \"7519db54-398f-4489-8839-90013af059d5\") " pod="calico-system/csi-node-driver-9m4mr" Apr 17 23:39:51.545417 kubelet[2540]: I0417 23:39:51.544528 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjzbn\" (UniqueName: \"kubernetes.io/projected/7519db54-398f-4489-8839-90013af059d5-kube-api-access-hjzbn\") pod \"csi-node-driver-9m4mr\" (UID: \"7519db54-398f-4489-8839-90013af059d5\") " pod="calico-system/csi-node-driver-9m4mr" Apr 17 23:39:51.545417 kubelet[2540]: I0417 23:39:51.544606 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7519db54-398f-4489-8839-90013af059d5-varrun\") pod \"csi-node-driver-9m4mr\" (UID: \"7519db54-398f-4489-8839-90013af059d5\") " pod="calico-system/csi-node-driver-9m4mr" Apr 17 23:39:51.545417 kubelet[2540]: I0417 23:39:51.544641 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7519db54-398f-4489-8839-90013af059d5-registration-dir\") pod \"csi-node-driver-9m4mr\" (UID: \"7519db54-398f-4489-8839-90013af059d5\") " pod="calico-system/csi-node-driver-9m4mr" Apr 17 23:39:51.545417 kubelet[2540]: I0417 23:39:51.544659 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7519db54-398f-4489-8839-90013af059d5-socket-dir\") pod 
\"csi-node-driver-9m4mr\" (UID: \"7519db54-398f-4489-8839-90013af059d5\") " pod="calico-system/csi-node-driver-9m4mr" Apr 17 23:39:51.546988 kubelet[2540]: E0417 23:39:51.546461 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.546988 kubelet[2540]: W0417 23:39:51.546476 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.546988 kubelet[2540]: E0417 23:39:51.546492 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.546988 kubelet[2540]: E0417 23:39:51.546921 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.546988 kubelet[2540]: W0417 23:39:51.546929 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.546988 kubelet[2540]: E0417 23:39:51.546939 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:51.547140 kubelet[2540]: E0417 23:39:51.547130 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.547140 kubelet[2540]: W0417 23:39:51.547138 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.547181 kubelet[2540]: E0417 23:39:51.547147 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.547381 kubelet[2540]: E0417 23:39:51.547364 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.547460 kubelet[2540]: W0417 23:39:51.547389 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.547460 kubelet[2540]: E0417 23:39:51.547400 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:51.547903 kubelet[2540]: E0417 23:39:51.547891 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.547903 kubelet[2540]: W0417 23:39:51.547903 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.547995 kubelet[2540]: E0417 23:39:51.547912 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.548183 kubelet[2540]: E0417 23:39:51.548171 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.548183 kubelet[2540]: W0417 23:39:51.548181 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.548393 kubelet[2540]: E0417 23:39:51.548190 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:51.548421 kubelet[2540]: E0417 23:39:51.548413 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.548447 kubelet[2540]: W0417 23:39:51.548421 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.548447 kubelet[2540]: E0417 23:39:51.548443 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.548688 kubelet[2540]: E0417 23:39:51.548675 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.548688 kubelet[2540]: W0417 23:39:51.548686 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.548769 kubelet[2540]: E0417 23:39:51.548733 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:51.548978 kubelet[2540]: E0417 23:39:51.548966 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.548978 kubelet[2540]: W0417 23:39:51.548976 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.549252 kubelet[2540]: E0417 23:39:51.548985 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.549433 kubelet[2540]: E0417 23:39:51.549421 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.549433 kubelet[2540]: W0417 23:39:51.549432 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.549531 kubelet[2540]: E0417 23:39:51.549440 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:51.549727 kubelet[2540]: E0417 23:39:51.549713 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.549727 kubelet[2540]: W0417 23:39:51.549724 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.549820 kubelet[2540]: E0417 23:39:51.549733 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.550188 kubelet[2540]: E0417 23:39:51.550176 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.550188 kubelet[2540]: W0417 23:39:51.550186 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.550336 kubelet[2540]: E0417 23:39:51.550194 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:51.550408 kubelet[2540]: E0417 23:39:51.550396 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.550408 kubelet[2540]: W0417 23:39:51.550406 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.550488 kubelet[2540]: E0417 23:39:51.550414 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.550676 kubelet[2540]: E0417 23:39:51.550664 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.550676 kubelet[2540]: W0417 23:39:51.550674 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.550806 kubelet[2540]: E0417 23:39:51.550682 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:51.551060 kubelet[2540]: E0417 23:39:51.551048 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.551060 kubelet[2540]: W0417 23:39:51.551058 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.551104 kubelet[2540]: E0417 23:39:51.551067 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.551317 kubelet[2540]: E0417 23:39:51.551293 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.551355 kubelet[2540]: W0417 23:39:51.551317 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.551355 kubelet[2540]: E0417 23:39:51.551325 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:51.551564 kubelet[2540]: E0417 23:39:51.551549 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.551564 kubelet[2540]: W0417 23:39:51.551562 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.551637 kubelet[2540]: E0417 23:39:51.551572 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.551855 kubelet[2540]: E0417 23:39:51.551842 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.551855 kubelet[2540]: W0417 23:39:51.551854 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.551855 kubelet[2540]: E0417 23:39:51.551863 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:51.552143 kubelet[2540]: E0417 23:39:51.552131 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.552143 kubelet[2540]: W0417 23:39:51.552141 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.552327 kubelet[2540]: E0417 23:39:51.552149 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.552362 kubelet[2540]: E0417 23:39:51.552353 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.552362 kubelet[2540]: W0417 23:39:51.552361 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.552408 kubelet[2540]: E0417 23:39:51.552369 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:51.552650 kubelet[2540]: E0417 23:39:51.552638 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.552650 kubelet[2540]: W0417 23:39:51.552649 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.553142 kubelet[2540]: E0417 23:39:51.552656 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.553201 kubelet[2540]: E0417 23:39:51.553177 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.553201 kubelet[2540]: W0417 23:39:51.553193 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.553253 kubelet[2540]: E0417 23:39:51.553202 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:51.554037 kubelet[2540]: E0417 23:39:51.553925 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.554037 kubelet[2540]: W0417 23:39:51.553937 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.554037 kubelet[2540]: E0417 23:39:51.553947 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.554280 kubelet[2540]: E0417 23:39:51.554167 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.554280 kubelet[2540]: W0417 23:39:51.554177 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.554280 kubelet[2540]: E0417 23:39:51.554186 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:51.554408 kubelet[2540]: E0417 23:39:51.554397 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.554458 kubelet[2540]: W0417 23:39:51.554448 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.554504 kubelet[2540]: E0417 23:39:51.554495 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.554956 kubelet[2540]: E0417 23:39:51.554945 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.555022 kubelet[2540]: W0417 23:39:51.555011 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.555064 kubelet[2540]: E0417 23:39:51.555055 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:51.556820 kubelet[2540]: E0417 23:39:51.555751 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.556820 kubelet[2540]: W0417 23:39:51.555762 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.556820 kubelet[2540]: E0417 23:39:51.555771 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.556820 kubelet[2540]: E0417 23:39:51.555976 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.556820 kubelet[2540]: W0417 23:39:51.555984 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.556820 kubelet[2540]: E0417 23:39:51.555992 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:51.556820 kubelet[2540]: E0417 23:39:51.556185 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.556820 kubelet[2540]: W0417 23:39:51.556194 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.556820 kubelet[2540]: E0417 23:39:51.556202 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.563245 kubelet[2540]: E0417 23:39:51.563231 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.563334 kubelet[2540]: W0417 23:39:51.563310 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.563426 kubelet[2540]: E0417 23:39:51.563414 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.571005 update_engine[1439]: I20260417 23:39:51.570971 1439 update_attempter.cc:509] Updating boot flags... 
Apr 17 23:39:51.571961 kubelet[2540]: E0417 23:39:51.571948 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.572063 kubelet[2540]: W0417 23:39:51.572050 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.572456 kubelet[2540]: E0417 23:39:51.572431 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.586528 kubelet[2540]: E0417 23:39:51.586505 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:39:51.587221 containerd[1456]: time="2026-04-17T23:39:51.586888259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76d8549644-bw799,Uid:9249417e-08df-4bbe-ba99-6af5f5d23352,Namespace:calico-system,Attempt:0,}" Apr 17 23:39:51.620857 containerd[1456]: time="2026-04-17T23:39:51.620345204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:51.620857 containerd[1456]: time="2026-04-17T23:39:51.620387054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:51.620857 containerd[1456]: time="2026-04-17T23:39:51.620400534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:51.620857 containerd[1456]: time="2026-04-17T23:39:51.620469814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:51.646961 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2998) Apr 17 23:39:51.648477 kubelet[2540]: E0417 23:39:51.648028 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.648477 kubelet[2540]: W0417 23:39:51.648043 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.648477 kubelet[2540]: E0417 23:39:51.648059 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.648477 kubelet[2540]: E0417 23:39:51.648332 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.648477 kubelet[2540]: W0417 23:39:51.648340 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.648477 kubelet[2540]: E0417 23:39:51.648349 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:51.649475 kubelet[2540]: E0417 23:39:51.649463 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.649557 kubelet[2540]: W0417 23:39:51.649529 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.649557 kubelet[2540]: E0417 23:39:51.649543 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.650554 kubelet[2540]: E0417 23:39:51.649888 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.650554 kubelet[2540]: W0417 23:39:51.649898 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.650554 kubelet[2540]: E0417 23:39:51.649908 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:51.650554 kubelet[2540]: E0417 23:39:51.650212 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.650554 kubelet[2540]: W0417 23:39:51.650221 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.650554 kubelet[2540]: E0417 23:39:51.650229 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.650554 kubelet[2540]: E0417 23:39:51.650526 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.650554 kubelet[2540]: W0417 23:39:51.650534 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.650554 kubelet[2540]: E0417 23:39:51.650542 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:51.660376 kubelet[2540]: E0417 23:39:51.660365 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.660425 kubelet[2540]: W0417 23:39:51.660416 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.660490 kubelet[2540]: E0417 23:39:51.660456 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.661904 systemd[1]: Started cri-containerd-f802a8a408f951ddc356efba8f55968771261d8df4f838cab68c6f3b8c3ef60e.scope - libcontainer container f802a8a408f951ddc356efba8f55968771261d8df4f838cab68c6f3b8c3ef60e. Apr 17 23:39:51.663961 kubelet[2540]: E0417 23:39:51.663880 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.664392 kubelet[2540]: W0417 23:39:51.664342 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.664476 kubelet[2540]: E0417 23:39:51.664439 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:51.671315 kubelet[2540]: E0417 23:39:51.669481 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.674976 kubelet[2540]: W0417 23:39:51.674308 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.679200 kubelet[2540]: E0417 23:39:51.679098 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.687897 kubelet[2540]: E0417 23:39:51.684875 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:51.687897 kubelet[2540]: W0417 23:39:51.684909 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:51.687897 kubelet[2540]: E0417 23:39:51.684921 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:51.692141 containerd[1456]: time="2026-04-17T23:39:51.692112844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bfnsx,Uid:8b8b92b3-1100-4b20-98e6-d86262eee269,Namespace:calico-system,Attempt:0,}" Apr 17 23:39:51.750623 containerd[1456]: time="2026-04-17T23:39:51.749855706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:51.750623 containerd[1456]: time="2026-04-17T23:39:51.749902436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:51.750623 containerd[1456]: time="2026-04-17T23:39:51.749916346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:51.750623 containerd[1456]: time="2026-04-17T23:39:51.750127205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:51.765726 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2998) Apr 17 23:39:51.790955 containerd[1456]: time="2026-04-17T23:39:51.790830885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76d8549644-bw799,Uid:9249417e-08df-4bbe-ba99-6af5f5d23352,Namespace:calico-system,Attempt:0,} returns sandbox id \"f802a8a408f951ddc356efba8f55968771261d8df4f838cab68c6f3b8c3ef60e\"" Apr 17 23:39:51.792999 kubelet[2540]: E0417 23:39:51.792975 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:39:51.795073 containerd[1456]: time="2026-04-17T23:39:51.795052335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 17 23:39:51.829827 systemd[1]: Started cri-containerd-31f79d66d9eb777c0521a08b0299608c1710b0c930cb4e05d42d194255848707.scope - libcontainer container 31f79d66d9eb777c0521a08b0299608c1710b0c930cb4e05d42d194255848707. 
Apr 17 23:39:51.871361 containerd[1456]: time="2026-04-17T23:39:51.871313305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bfnsx,Uid:8b8b92b3-1100-4b20-98e6-d86262eee269,Namespace:calico-system,Attempt:0,} returns sandbox id \"31f79d66d9eb777c0521a08b0299608c1710b0c930cb4e05d42d194255848707\"" Apr 17 23:39:51.891740 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2998) Apr 17 23:39:53.122461 containerd[1456]: time="2026-04-17T23:39:53.122417384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:53.123324 containerd[1456]: time="2026-04-17T23:39:53.123253912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 17 23:39:53.124782 containerd[1456]: time="2026-04-17T23:39:53.123837001Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:53.126065 containerd[1456]: time="2026-04-17T23:39:53.126038586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:53.126993 containerd[1456]: time="2026-04-17T23:39:53.126939085Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.331561601s" Apr 17 23:39:53.127093 containerd[1456]: time="2026-04-17T23:39:53.127076534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns 
image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 17 23:39:53.128993 containerd[1456]: time="2026-04-17T23:39:53.128969751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 17 23:39:53.140896 containerd[1456]: time="2026-04-17T23:39:53.140866787Z" level=info msg="CreateContainer within sandbox \"f802a8a408f951ddc356efba8f55968771261d8df4f838cab68c6f3b8c3ef60e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 17 23:39:53.158846 containerd[1456]: time="2026-04-17T23:39:53.158812552Z" level=info msg="CreateContainer within sandbox \"f802a8a408f951ddc356efba8f55968771261d8df4f838cab68c6f3b8c3ef60e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d3c372ba61141e666dd3c566f024d746c46f3be94fd3ff3c0dc5372be5b63d3c\"" Apr 17 23:39:53.159905 containerd[1456]: time="2026-04-17T23:39:53.159376571Z" level=info msg="StartContainer for \"d3c372ba61141e666dd3c566f024d746c46f3be94fd3ff3c0dc5372be5b63d3c\"" Apr 17 23:39:53.194862 systemd[1]: Started cri-containerd-d3c372ba61141e666dd3c566f024d746c46f3be94fd3ff3c0dc5372be5b63d3c.scope - libcontainer container d3c372ba61141e666dd3c566f024d746c46f3be94fd3ff3c0dc5372be5b63d3c. Apr 17 23:39:53.234092 containerd[1456]: time="2026-04-17T23:39:53.234065085Z" level=info msg="StartContainer for \"d3c372ba61141e666dd3c566f024d746c46f3be94fd3ff3c0dc5372be5b63d3c\" returns successfully" Apr 17 23:39:53.452119 systemd[1]: run-containerd-runc-k8s.io-d3c372ba61141e666dd3c566f024d746c46f3be94fd3ff3c0dc5372be5b63d3c-runc.Cbea4a.mount: Deactivated successfully. 
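The recurring kubelet dns.go warning above ("Nameserver limits exceeded") reflects the resolver limit of three `nameserver` entries per resolv.conf: when the node's resolv.conf lists more, the kubelet truncates the list and logs the entries it actually applied (here 172.232.0.19, 172.232.0.20, 172.232.0.15). A hypothetical resolv.conf that would trigger this warning — the fourth entry is an invented example, not taken from this log:

```text
nameserver 172.232.0.19
nameserver 172.232.0.20
nameserver 172.232.0.15
nameserver 172.232.0.99   # hypothetical fourth entry; only the first three are applied
```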
Apr 17 23:39:53.758978 containerd[1456]: time="2026-04-17T23:39:53.758204630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 17 23:39:53.758978 containerd[1456]: time="2026-04-17T23:39:53.758327310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:53.760221 containerd[1456]: time="2026-04-17T23:39:53.760177276Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:53.761752 containerd[1456]: time="2026-04-17T23:39:53.760875075Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 631.878854ms" Apr 17 23:39:53.761752 containerd[1456]: time="2026-04-17T23:39:53.760908435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 17 23:39:53.761752 containerd[1456]: time="2026-04-17T23:39:53.761352914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:53.764853 kubelet[2540]: E0417 23:39:53.764815 2540 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized" pod="calico-system/csi-node-driver-9m4mr" podUID="7519db54-398f-4489-8839-90013af059d5" Apr 17 23:39:53.765961 containerd[1456]: time="2026-04-17T23:39:53.765930415Z" level=info msg="CreateContainer within sandbox \"31f79d66d9eb777c0521a08b0299608c1710b0c930cb4e05d42d194255848707\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 17 23:39:53.779616 containerd[1456]: time="2026-04-17T23:39:53.779581358Z" level=info msg="CreateContainer within sandbox \"31f79d66d9eb777c0521a08b0299608c1710b0c930cb4e05d42d194255848707\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"75f3b83cb0ff758b556e68f0c912d4edb961fc6141667533d0548af12ad570c6\"" Apr 17 23:39:53.780399 containerd[1456]: time="2026-04-17T23:39:53.780371357Z" level=info msg="StartContainer for \"75f3b83cb0ff758b556e68f0c912d4edb961fc6141667533d0548af12ad570c6\"" Apr 17 23:39:53.819862 systemd[1]: Started cri-containerd-75f3b83cb0ff758b556e68f0c912d4edb961fc6141667533d0548af12ad570c6.scope - libcontainer container 75f3b83cb0ff758b556e68f0c912d4edb961fc6141667533d0548af12ad570c6. 
Apr 17 23:39:53.846496 kubelet[2540]: E0417 23:39:53.846454 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:39:53.851387 kubelet[2540]: E0417 23:39:53.851232 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:53.851387 kubelet[2540]: W0417 23:39:53.851265 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:53.851387 kubelet[2540]: E0417 23:39:53.851287 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:53.851598 kubelet[2540]: E0417 23:39:53.851586 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:53.851805 kubelet[2540]: W0417 23:39:53.851716 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:53.851805 kubelet[2540]: E0417 23:39:53.851731 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:53.856014 containerd[1456]: time="2026-04-17T23:39:53.855762269Z" level=info msg="StartContainer for \"75f3b83cb0ff758b556e68f0c912d4edb961fc6141667533d0548af12ad570c6\" returns successfully" Apr 17 23:39:53.856070 kubelet[2540]: E0417 23:39:53.855899 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:53.856070 kubelet[2540]: W0417 23:39:53.855907 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:53.856070 kubelet[2540]: E0417 23:39:53.855932 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:53.856475 kubelet[2540]: E0417 23:39:53.856341 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:53.856475 kubelet[2540]: W0417 23:39:53.856353 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:53.856475 kubelet[2540]: E0417 23:39:53.856361 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:53.856790 kubelet[2540]: E0417 23:39:53.856779 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:53.856899 kubelet[2540]: W0417 23:39:53.856852 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:53.856899 kubelet[2540]: E0417 23:39:53.856864 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:53.857309 kubelet[2540]: E0417 23:39:53.857201 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:53.857309 kubelet[2540]: W0417 23:39:53.857234 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:53.857309 kubelet[2540]: E0417 23:39:53.857243 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:53.857815 kubelet[2540]: E0417 23:39:53.857663 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:53.857815 kubelet[2540]: W0417 23:39:53.857673 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:53.857815 kubelet[2540]: E0417 23:39:53.857683 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:53.858236 kubelet[2540]: E0417 23:39:53.858116 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:53.858236 kubelet[2540]: W0417 23:39:53.858141 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:53.858236 kubelet[2540]: E0417 23:39:53.858149 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:53.858488 kubelet[2540]: E0417 23:39:53.858452 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:53.858568 kubelet[2540]: W0417 23:39:53.858527 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:53.858646 kubelet[2540]: E0417 23:39:53.858539 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:53.859013 kubelet[2540]: E0417 23:39:53.858979 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:53.859135 kubelet[2540]: W0417 23:39:53.859070 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:53.859135 kubelet[2540]: E0417 23:39:53.859082 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:53.859533 kubelet[2540]: E0417 23:39:53.859473 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:53.859533 kubelet[2540]: W0417 23:39:53.859483 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:53.859533 kubelet[2540]: E0417 23:39:53.859491 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:53.860077 kubelet[2540]: E0417 23:39:53.860065 2540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:53.860222 kubelet[2540]: W0417 23:39:53.860138 2540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:53.860222 kubelet[2540]: E0417 23:39:53.860150 2540 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:53.892880 systemd[1]: cri-containerd-75f3b83cb0ff758b556e68f0c912d4edb961fc6141667533d0548af12ad570c6.scope: Deactivated successfully. Apr 17 23:39:53.924201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75f3b83cb0ff758b556e68f0c912d4edb961fc6141667533d0548af12ad570c6-rootfs.mount: Deactivated successfully.
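The repeated "Failed to unmarshal output ... unexpected end of JSON input" entries above come from the kubelet's FlexVolume prober: the driver binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ is missing, so the `init` call produces no output at all, and decoding that empty output fails (the error string is Go's `json.Unmarshal` on empty input). A minimal Python sketch of that decode step, illustrative only — the function name is mine, not the kubelet's:

```python
import json

def parse_driver_output(output: str):
    """Decode a FlexVolume driver's JSON status, as the kubelet's
    driver-call path does; a missing driver yields empty output."""
    try:
        return json.loads(output)
    except json.JSONDecodeError as e:
        # Go reports "unexpected end of JSON input" here; Python's json
        # module raises a JSONDecodeError with a comparable message.
        return f"failed to unmarshal output: {e.msg}"

print(parse_driver_output('{"status": "Success"}'))
print(parse_driver_output(""))  # empty output from the absent driver binary
```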
Apr 17 23:39:54.013878 containerd[1456]: time="2026-04-17T23:39:54.013743392Z" level=info msg="shim disconnected" id=75f3b83cb0ff758b556e68f0c912d4edb961fc6141667533d0548af12ad570c6 namespace=k8s.io Apr 17 23:39:54.013878 containerd[1456]: time="2026-04-17T23:39:54.013812772Z" level=warning msg="cleaning up after shim disconnected" id=75f3b83cb0ff758b556e68f0c912d4edb961fc6141667533d0548af12ad570c6 namespace=k8s.io Apr 17 23:39:54.013878 containerd[1456]: time="2026-04-17T23:39:54.013822112Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:39:54.852132 kubelet[2540]: I0417 23:39:54.852076 2540 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:39:54.852558 kubelet[2540]: E0417 23:39:54.852364 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:39:54.854184 containerd[1456]: time="2026-04-17T23:39:54.853952482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 17 23:39:54.867525 kubelet[2540]: I0417 23:39:54.867048 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76d8549644-bw799" podStartSLOduration=2.533502961 podStartE2EDuration="3.867034578s" podCreationTimestamp="2026-04-17 23:39:51 +0000 UTC" firstStartedPulling="2026-04-17 23:39:51.794489456 +0000 UTC m=+15.128386833" lastFinishedPulling="2026-04-17 23:39:53.128021083 +0000 UTC m=+16.461918450" observedRunningTime="2026-04-17 23:39:53.866024469 +0000 UTC m=+17.199921836" watchObservedRunningTime="2026-04-17 23:39:54.867034578 +0000 UTC m=+18.200931945" Apr 17 23:39:55.765729 kubelet[2540]: E0417 23:39:55.764395 2540 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-9m4mr" podUID="7519db54-398f-4489-8839-90013af059d5" Apr 17 23:39:57.764406 kubelet[2540]: E0417 23:39:57.764339 2540 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9m4mr" podUID="7519db54-398f-4489-8839-90013af059d5" Apr 17 23:39:58.495948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3252759074.mount: Deactivated successfully. Apr 17 23:39:58.525176 containerd[1456]: time="2026-04-17T23:39:58.525118068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:58.525958 containerd[1456]: time="2026-04-17T23:39:58.525910197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 17 23:39:58.526729 containerd[1456]: time="2026-04-17T23:39:58.526394746Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:58.528151 containerd[1456]: time="2026-04-17T23:39:58.528110264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:58.529570 containerd[1456]: time="2026-04-17T23:39:58.528762583Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 3.674777561s" 
Apr 17 23:39:58.529570 containerd[1456]: time="2026-04-17T23:39:58.528799453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 17 23:39:58.533808 containerd[1456]: time="2026-04-17T23:39:58.533764376Z" level=info msg="CreateContainer within sandbox \"31f79d66d9eb777c0521a08b0299608c1710b0c930cb4e05d42d194255848707\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 17 23:39:58.551153 containerd[1456]: time="2026-04-17T23:39:58.551102371Z" level=info msg="CreateContainer within sandbox \"31f79d66d9eb777c0521a08b0299608c1710b0c930cb4e05d42d194255848707\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"52b4bbb067fcebc617e86c80f5206d283fb023beb4617a7262b7601b22c20a7a\"" Apr 17 23:39:58.551948 containerd[1456]: time="2026-04-17T23:39:58.551818890Z" level=info msg="StartContainer for \"52b4bbb067fcebc617e86c80f5206d283fb023beb4617a7262b7601b22c20a7a\"" Apr 17 23:39:58.593206 systemd[1]: Started cri-containerd-52b4bbb067fcebc617e86c80f5206d283fb023beb4617a7262b7601b22c20a7a.scope - libcontainer container 52b4bbb067fcebc617e86c80f5206d283fb023beb4617a7262b7601b22c20a7a. Apr 17 23:39:58.632102 containerd[1456]: time="2026-04-17T23:39:58.632058696Z" level=info msg="StartContainer for \"52b4bbb067fcebc617e86c80f5206d283fb023beb4617a7262b7601b22c20a7a\" returns successfully" Apr 17 23:39:58.679304 systemd[1]: cri-containerd-52b4bbb067fcebc617e86c80f5206d283fb023beb4617a7262b7601b22c20a7a.scope: Deactivated successfully. 
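The pull record above reports 159838564 bytes read for ghcr.io/flatcar/calico/node:v3.31.4 in 3.674777561s; as a quick sanity check, that works out to a little over 41 MiB/s:

```python
# Figures taken from the containerd pull log above.
bytes_read = 159_838_564   # "active requests=0, bytes read=159838564"
elapsed_s = 3.674777561    # "... in 3.674777561s"

rate_mib_s = bytes_read / elapsed_s / (1024 * 1024)
print(f"effective pull rate: {rate_mib_s:.1f} MiB/s")
```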
Apr 17 23:39:58.835280 containerd[1456]: time="2026-04-17T23:39:58.835063049Z" level=info msg="shim disconnected" id=52b4bbb067fcebc617e86c80f5206d283fb023beb4617a7262b7601b22c20a7a namespace=k8s.io Apr 17 23:39:58.835280 containerd[1456]: time="2026-04-17T23:39:58.835110979Z" level=warning msg="cleaning up after shim disconnected" id=52b4bbb067fcebc617e86c80f5206d283fb023beb4617a7262b7601b22c20a7a namespace=k8s.io Apr 17 23:39:58.835280 containerd[1456]: time="2026-04-17T23:39:58.835119659Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:39:58.867636 containerd[1456]: time="2026-04-17T23:39:58.867217673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 17 23:39:59.495823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52b4bbb067fcebc617e86c80f5206d283fb023beb4617a7262b7601b22c20a7a-rootfs.mount: Deactivated successfully. Apr 17 23:39:59.764123 kubelet[2540]: E0417 23:39:59.764004 2540 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9m4mr" podUID="7519db54-398f-4489-8839-90013af059d5" Apr 17 23:40:00.432994 containerd[1456]: time="2026-04-17T23:40:00.432924399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:00.433838 containerd[1456]: time="2026-04-17T23:40:00.433687908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 17 23:40:00.434754 containerd[1456]: time="2026-04-17T23:40:00.434238017Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:00.436414 containerd[1456]: 
time="2026-04-17T23:40:00.436121265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:00.436876 containerd[1456]: time="2026-04-17T23:40:00.436846264Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.569591321s" Apr 17 23:40:00.436921 containerd[1456]: time="2026-04-17T23:40:00.436874234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 17 23:40:00.444416 containerd[1456]: time="2026-04-17T23:40:00.444381615Z" level=info msg="CreateContainer within sandbox \"31f79d66d9eb777c0521a08b0299608c1710b0c930cb4e05d42d194255848707\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 17 23:40:00.468237 containerd[1456]: time="2026-04-17T23:40:00.468205505Z" level=info msg="CreateContainer within sandbox \"31f79d66d9eb777c0521a08b0299608c1710b0c930cb4e05d42d194255848707\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"77eef4b78927d156460fa168d2b236666145ef3538abe34657ee3bb72db16f83\"" Apr 17 23:40:00.469042 containerd[1456]: time="2026-04-17T23:40:00.468916934Z" level=info msg="StartContainer for \"77eef4b78927d156460fa168d2b236666145ef3538abe34657ee3bb72db16f83\"" Apr 17 23:40:00.502128 systemd[1]: run-containerd-runc-k8s.io-77eef4b78927d156460fa168d2b236666145ef3538abe34657ee3bb72db16f83-runc.RVSSCH.mount: Deactivated successfully. 
Apr 17 23:40:00.511822 systemd[1]: Started cri-containerd-77eef4b78927d156460fa168d2b236666145ef3538abe34657ee3bb72db16f83.scope - libcontainer container 77eef4b78927d156460fa168d2b236666145ef3538abe34657ee3bb72db16f83. Apr 17 23:40:00.542803 containerd[1456]: time="2026-04-17T23:40:00.542763262Z" level=info msg="StartContainer for \"77eef4b78927d156460fa168d2b236666145ef3538abe34657ee3bb72db16f83\" returns successfully" Apr 17 23:40:01.084879 containerd[1456]: time="2026-04-17T23:40:01.084803704Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:40:01.087671 systemd[1]: cri-containerd-77eef4b78927d156460fa168d2b236666145ef3538abe34657ee3bb72db16f83.scope: Deactivated successfully. Apr 17 23:40:01.114001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77eef4b78927d156460fa168d2b236666145ef3538abe34657ee3bb72db16f83-rootfs.mount: Deactivated successfully. 
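The "cni config load failed: no network config found in /etc/cni/net.d" error above is containerd's CNI readiness check: the runtime stays NetworkReady=false until a network config file exists in its confdir, which is exactly what Calico's install-cni container is in the middle of writing here. A rough sketch of that presence check (confdir path and extensions assumed from containerd's defaults):

```python
import glob
import os

CNI_CONF_DIR = "/etc/cni/net.d"  # containerd's default CNI confdir

def cni_config_present(conf_dir: str = CNI_CONF_DIR) -> bool:
    """True once at least one CNI network config exists in conf_dir."""
    patterns = ("*.conf", "*.conflist", "*.json")
    return any(glob.glob(os.path.join(conf_dir, p)) for p in patterns)
```

Until a file such as Calico's 10-calico.conflist lands in that directory, the kubelet keeps logging "container runtime network not ready ... cni plugin not initialized", which is why the csi-node-driver pod above cannot sync.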
Apr 17 23:40:01.132986 kubelet[2540]: I0417 23:40:01.132960 2540 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 17 23:40:01.156748 containerd[1456]: time="2026-04-17T23:40:01.154661252Z" level=info msg="shim disconnected" id=77eef4b78927d156460fa168d2b236666145ef3538abe34657ee3bb72db16f83 namespace=k8s.io Apr 17 23:40:01.156748 containerd[1456]: time="2026-04-17T23:40:01.156731640Z" level=warning msg="cleaning up after shim disconnected" id=77eef4b78927d156460fa168d2b236666145ef3538abe34657ee3bb72db16f83 namespace=k8s.io Apr 17 23:40:01.156748 containerd[1456]: time="2026-04-17T23:40:01.156747460Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:40:01.211515 systemd[1]: Created slice kubepods-burstable-pod3b2079f5_4ea9_4796_8181_3d79d0da7db2.slice - libcontainer container kubepods-burstable-pod3b2079f5_4ea9_4796_8181_3d79d0da7db2.slice. Apr 17 23:40:01.223293 systemd[1]: Created slice kubepods-besteffort-poda9d870f4_95e3_4941_9a77_e1b80afca9bd.slice - libcontainer container kubepods-besteffort-poda9d870f4_95e3_4941_9a77_e1b80afca9bd.slice. Apr 17 23:40:01.231525 systemd[1]: Created slice kubepods-besteffort-pode92690a1_a6c6_4a96_8b33_2a2ebd323317.slice - libcontainer container kubepods-besteffort-pode92690a1_a6c6_4a96_8b33_2a2ebd323317.slice. 
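The slice names above encode each pod's QoS class and UID: systemd uses "-" as its hierarchy separator, so the kubelet's systemd cgroup driver replaces the dashes in the pod UID with underscores (compare kubepods-burstable-pod3b2079f5_4ea9_4796_8181_3d79d0da7db2.slice above with the pod UID 3b2079f5-4ea9-4796-8181-3d79d0da7db2 used elsewhere in this log). A small sketch of that mapping, with a helper name of my own:

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    """Build the systemd slice name the kubelet's systemd cgroup driver
    uses for a pod: dashes in the UID become underscores, since "-"
    separates levels in a systemd slice hierarchy."""
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("burstable", "3b2079f5-4ea9-4796-8181-3d79d0da7db2"))
```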
Apr 17 23:40:01.235568 kubelet[2540]: I0417 23:40:01.234793 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dn8c\" (UniqueName: \"kubernetes.io/projected/98a1a605-7c5c-4f5c-801d-322ef1144e09-kube-api-access-6dn8c\") pod \"coredns-66bc5c9577-lh2x8\" (UID: \"98a1a605-7c5c-4f5c-801d-322ef1144e09\") " pod="kube-system/coredns-66bc5c9577-lh2x8" Apr 17 23:40:01.235568 kubelet[2540]: I0417 23:40:01.235025 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a9d870f4-95e3-4941-9a77-e1b80afca9bd-calico-apiserver-certs\") pod \"calico-apiserver-747d4d9564-p57qj\" (UID: \"a9d870f4-95e3-4941-9a77-e1b80afca9bd\") " pod="calico-system/calico-apiserver-747d4d9564-p57qj" Apr 17 23:40:01.235568 kubelet[2540]: I0417 23:40:01.235045 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk9x6\" (UniqueName: \"kubernetes.io/projected/a9d870f4-95e3-4941-9a77-e1b80afca9bd-kube-api-access-rk9x6\") pod \"calico-apiserver-747d4d9564-p57qj\" (UID: \"a9d870f4-95e3-4941-9a77-e1b80afca9bd\") " pod="calico-system/calico-apiserver-747d4d9564-p57qj" Apr 17 23:40:01.235568 kubelet[2540]: I0417 23:40:01.235087 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2skt\" (UniqueName: \"kubernetes.io/projected/fe4b5225-389a-4c5f-90d9-d343b520891b-kube-api-access-m2skt\") pod \"calico-apiserver-747d4d9564-87p66\" (UID: \"fe4b5225-389a-4c5f-90d9-d343b520891b\") " pod="calico-system/calico-apiserver-747d4d9564-87p66" Apr 17 23:40:01.235568 kubelet[2540]: I0417 23:40:01.235106 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fea8698d-451a-456c-836e-d755be496f91-whisker-ca-bundle\") pod 
\"whisker-69974dd7c9-fzcgt\" (UID: \"fea8698d-451a-456c-836e-d755be496f91\") " pod="calico-system/whisker-69974dd7c9-fzcgt" Apr 17 23:40:01.235800 kubelet[2540]: I0417 23:40:01.235120 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzjsh\" (UniqueName: \"kubernetes.io/projected/e92690a1-a6c6-4a96-8b33-2a2ebd323317-kube-api-access-dzjsh\") pod \"goldmane-cccfbd5cf-hxd5z\" (UID: \"e92690a1-a6c6-4a96-8b33-2a2ebd323317\") " pod="calico-system/goldmane-cccfbd5cf-hxd5z" Apr 17 23:40:01.235800 kubelet[2540]: I0417 23:40:01.235133 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98a1a605-7c5c-4f5c-801d-322ef1144e09-config-volume\") pod \"coredns-66bc5c9577-lh2x8\" (UID: \"98a1a605-7c5c-4f5c-801d-322ef1144e09\") " pod="kube-system/coredns-66bc5c9577-lh2x8" Apr 17 23:40:01.235800 kubelet[2540]: I0417 23:40:01.235146 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fe4b5225-389a-4c5f-90d9-d343b520891b-calico-apiserver-certs\") pod \"calico-apiserver-747d4d9564-87p66\" (UID: \"fe4b5225-389a-4c5f-90d9-d343b520891b\") " pod="calico-system/calico-apiserver-747d4d9564-87p66" Apr 17 23:40:01.235800 kubelet[2540]: I0417 23:40:01.235160 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b465cf39-1a7a-43c8-8b20-c06a445d067b-tigera-ca-bundle\") pod \"calico-kube-controllers-5d5d6df97d-qjfqb\" (UID: \"b465cf39-1a7a-43c8-8b20-c06a445d067b\") " pod="calico-system/calico-kube-controllers-5d5d6df97d-qjfqb" Apr 17 23:40:01.235800 kubelet[2540]: I0417 23:40:01.235173 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e92690a1-a6c6-4a96-8b33-2a2ebd323317-config\") pod \"goldmane-cccfbd5cf-hxd5z\" (UID: \"e92690a1-a6c6-4a96-8b33-2a2ebd323317\") " pod="calico-system/goldmane-cccfbd5cf-hxd5z" Apr 17 23:40:01.235908 kubelet[2540]: I0417 23:40:01.235186 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e92690a1-a6c6-4a96-8b33-2a2ebd323317-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-hxd5z\" (UID: \"e92690a1-a6c6-4a96-8b33-2a2ebd323317\") " pod="calico-system/goldmane-cccfbd5cf-hxd5z" Apr 17 23:40:01.235908 kubelet[2540]: I0417 23:40:01.235209 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqdzm\" (UniqueName: \"kubernetes.io/projected/3b2079f5-4ea9-4796-8181-3d79d0da7db2-kube-api-access-nqdzm\") pod \"coredns-66bc5c9577-2kcx5\" (UID: \"3b2079f5-4ea9-4796-8181-3d79d0da7db2\") " pod="kube-system/coredns-66bc5c9577-2kcx5" Apr 17 23:40:01.235908 kubelet[2540]: I0417 23:40:01.235224 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgzs6\" (UniqueName: \"kubernetes.io/projected/fea8698d-451a-456c-836e-d755be496f91-kube-api-access-mgzs6\") pod \"whisker-69974dd7c9-fzcgt\" (UID: \"fea8698d-451a-456c-836e-d755be496f91\") " pod="calico-system/whisker-69974dd7c9-fzcgt" Apr 17 23:40:01.235908 kubelet[2540]: I0417 23:40:01.235237 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e92690a1-a6c6-4a96-8b33-2a2ebd323317-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-hxd5z\" (UID: \"e92690a1-a6c6-4a96-8b33-2a2ebd323317\") " pod="calico-system/goldmane-cccfbd5cf-hxd5z" Apr 17 23:40:01.235908 kubelet[2540]: I0417 23:40:01.235250 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2079f5-4ea9-4796-8181-3d79d0da7db2-config-volume\") pod \"coredns-66bc5c9577-2kcx5\" (UID: \"3b2079f5-4ea9-4796-8181-3d79d0da7db2\") " pod="kube-system/coredns-66bc5c9577-2kcx5" Apr 17 23:40:01.236020 kubelet[2540]: I0417 23:40:01.235273 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4k7x\" (UniqueName: \"kubernetes.io/projected/b465cf39-1a7a-43c8-8b20-c06a445d067b-kube-api-access-w4k7x\") pod \"calico-kube-controllers-5d5d6df97d-qjfqb\" (UID: \"b465cf39-1a7a-43c8-8b20-c06a445d067b\") " pod="calico-system/calico-kube-controllers-5d5d6df97d-qjfqb" Apr 17 23:40:01.236020 kubelet[2540]: I0417 23:40:01.235293 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fea8698d-451a-456c-836e-d755be496f91-whisker-backend-key-pair\") pod \"whisker-69974dd7c9-fzcgt\" (UID: \"fea8698d-451a-456c-836e-d755be496f91\") " pod="calico-system/whisker-69974dd7c9-fzcgt" Apr 17 23:40:01.236020 kubelet[2540]: I0417 23:40:01.235310 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/fea8698d-451a-456c-836e-d755be496f91-nginx-config\") pod \"whisker-69974dd7c9-fzcgt\" (UID: \"fea8698d-451a-456c-836e-d755be496f91\") " pod="calico-system/whisker-69974dd7c9-fzcgt" Apr 17 23:40:01.241740 systemd[1]: Created slice kubepods-burstable-pod98a1a605_7c5c_4f5c_801d_322ef1144e09.slice - libcontainer container kubepods-burstable-pod98a1a605_7c5c_4f5c_801d_322ef1144e09.slice. Apr 17 23:40:01.250525 systemd[1]: Created slice kubepods-besteffort-podb465cf39_1a7a_43c8_8b20_c06a445d067b.slice - libcontainer container kubepods-besteffort-podb465cf39_1a7a_43c8_8b20_c06a445d067b.slice. 
Apr 17 23:40:01.260012 systemd[1]: Created slice kubepods-besteffort-podfea8698d_451a_456c_836e_d755be496f91.slice - libcontainer container kubepods-besteffort-podfea8698d_451a_456c_836e_d755be496f91.slice. Apr 17 23:40:01.267760 systemd[1]: Created slice kubepods-besteffort-podfe4b5225_389a_4c5f_90d9_d343b520891b.slice - libcontainer container kubepods-besteffort-podfe4b5225_389a_4c5f_90d9_d343b520891b.slice. Apr 17 23:40:01.520970 kubelet[2540]: E0417 23:40:01.520889 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:40:01.521712 containerd[1456]: time="2026-04-17T23:40:01.521631044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2kcx5,Uid:3b2079f5-4ea9-4796-8181-3d79d0da7db2,Namespace:kube-system,Attempt:0,}" Apr 17 23:40:01.536240 containerd[1456]: time="2026-04-17T23:40:01.536214077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d4d9564-p57qj,Uid:a9d870f4-95e3-4941-9a77-e1b80afca9bd,Namespace:calico-system,Attempt:0,}" Apr 17 23:40:01.543681 containerd[1456]: time="2026-04-17T23:40:01.543657598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-hxd5z,Uid:e92690a1-a6c6-4a96-8b33-2a2ebd323317,Namespace:calico-system,Attempt:0,}" Apr 17 23:40:01.548867 kubelet[2540]: E0417 23:40:01.548841 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:40:01.551308 containerd[1456]: time="2026-04-17T23:40:01.550879420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lh2x8,Uid:98a1a605-7c5c-4f5c-801d-322ef1144e09,Namespace:kube-system,Attempt:0,}" Apr 17 23:40:01.558537 containerd[1456]: time="2026-04-17T23:40:01.558281391Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5d5d6df97d-qjfqb,Uid:b465cf39-1a7a-43c8-8b20-c06a445d067b,Namespace:calico-system,Attempt:0,}" Apr 17 23:40:01.566402 containerd[1456]: time="2026-04-17T23:40:01.566267482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69974dd7c9-fzcgt,Uid:fea8698d-451a-456c-836e-d755be496f91,Namespace:calico-system,Attempt:0,}" Apr 17 23:40:01.572615 containerd[1456]: time="2026-04-17T23:40:01.572415465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d4d9564-87p66,Uid:fe4b5225-389a-4c5f-90d9-d343b520891b,Namespace:calico-system,Attempt:0,}" Apr 17 23:40:01.690067 containerd[1456]: time="2026-04-17T23:40:01.690019297Z" level=error msg="Failed to destroy network for sandbox \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.690538 containerd[1456]: time="2026-04-17T23:40:01.690397897Z" level=error msg="encountered an error cleaning up failed sandbox \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.690538 containerd[1456]: time="2026-04-17T23:40:01.690443937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2kcx5,Uid:3b2079f5-4ea9-4796-8181-3d79d0da7db2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 17 23:40:01.690889 kubelet[2540]: E0417 23:40:01.690792 2540 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.692340 kubelet[2540]: E0417 23:40:01.692314 2540 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2kcx5" Apr 17 23:40:01.692458 kubelet[2540]: E0417 23:40:01.692346 2540 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2kcx5" Apr 17 23:40:01.693512 kubelet[2540]: E0417 23:40:01.692516 2540 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-2kcx5_kube-system(3b2079f5-4ea9-4796-8181-3d79d0da7db2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-2kcx5_kube-system(3b2079f5-4ea9-4796-8181-3d79d0da7db2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-2kcx5" podUID="3b2079f5-4ea9-4796-8181-3d79d0da7db2" Apr 17 23:40:01.727864 containerd[1456]: time="2026-04-17T23:40:01.727826743Z" level=error msg="Failed to destroy network for sandbox \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.728323 containerd[1456]: time="2026-04-17T23:40:01.728300043Z" level=error msg="encountered an error cleaning up failed sandbox \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.728420 containerd[1456]: time="2026-04-17T23:40:01.728398523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d4d9564-p57qj,Uid:a9d870f4-95e3-4941-9a77-e1b80afca9bd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.728805 kubelet[2540]: E0417 23:40:01.728758 2540 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.728874 kubelet[2540]: E0417 23:40:01.728813 2540 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-747d4d9564-p57qj" Apr 17 23:40:01.728874 kubelet[2540]: E0417 23:40:01.728833 2540 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-747d4d9564-p57qj" Apr 17 23:40:01.729015 kubelet[2540]: E0417 23:40:01.728875 2540 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-747d4d9564-p57qj_calico-system(a9d870f4-95e3-4941-9a77-e1b80afca9bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-747d4d9564-p57qj_calico-system(a9d870f4-95e3-4941-9a77-e1b80afca9bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-747d4d9564-p57qj" podUID="a9d870f4-95e3-4941-9a77-e1b80afca9bd" Apr 17 23:40:01.771239 containerd[1456]: time="2026-04-17T23:40:01.771132433Z" 
level=error msg="Failed to destroy network for sandbox \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.785081 systemd[1]: Created slice kubepods-besteffort-pod7519db54_398f_4489_8839_90013af059d5.slice - libcontainer container kubepods-besteffort-pod7519db54_398f_4489_8839_90013af059d5.slice. Apr 17 23:40:01.786755 containerd[1456]: time="2026-04-17T23:40:01.785650056Z" level=error msg="encountered an error cleaning up failed sandbox \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.786828 containerd[1456]: time="2026-04-17T23:40:01.786798944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-hxd5z,Uid:e92690a1-a6c6-4a96-8b33-2a2ebd323317,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.789185 kubelet[2540]: E0417 23:40:01.789151 2540 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.789235 kubelet[2540]: E0417 
23:40:01.789201 2540 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-hxd5z" Apr 17 23:40:01.789235 kubelet[2540]: E0417 23:40:01.789220 2540 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-hxd5z" Apr 17 23:40:01.789297 kubelet[2540]: E0417 23:40:01.789259 2540 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-hxd5z_calico-system(e92690a1-a6c6-4a96-8b33-2a2ebd323317)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-hxd5z_calico-system(e92690a1-a6c6-4a96-8b33-2a2ebd323317)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-hxd5z" podUID="e92690a1-a6c6-4a96-8b33-2a2ebd323317" Apr 17 23:40:01.791389 containerd[1456]: time="2026-04-17T23:40:01.791357449Z" level=error msg="Failed to destroy network for sandbox \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.791932 containerd[1456]: time="2026-04-17T23:40:01.791900398Z" level=error msg="encountered an error cleaning up failed sandbox \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.791985 containerd[1456]: time="2026-04-17T23:40:01.791942228Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lh2x8,Uid:98a1a605-7c5c-4f5c-801d-322ef1144e09,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.798680 kubelet[2540]: E0417 23:40:01.798651 2540 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.798863 kubelet[2540]: E0417 23:40:01.798689 2540 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lh2x8" Apr 17 23:40:01.798902 kubelet[2540]: E0417 23:40:01.798867 2540 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lh2x8" Apr 17 23:40:01.798936 kubelet[2540]: E0417 23:40:01.798903 2540 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-lh2x8_kube-system(98a1a605-7c5c-4f5c-801d-322ef1144e09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-lh2x8_kube-system(98a1a605-7c5c-4f5c-801d-322ef1144e09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-lh2x8" podUID="98a1a605-7c5c-4f5c-801d-322ef1144e09" Apr 17 23:40:01.801525 containerd[1456]: time="2026-04-17T23:40:01.799478150Z" level=error msg="Failed to destroy network for sandbox \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.801609 containerd[1456]: time="2026-04-17T23:40:01.799870529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9m4mr,Uid:7519db54-398f-4489-8839-90013af059d5,Namespace:calico-system,Attempt:0,}" Apr 17 
23:40:01.802857 containerd[1456]: time="2026-04-17T23:40:01.800528358Z" level=error msg="Failed to destroy network for sandbox \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.803291 containerd[1456]: time="2026-04-17T23:40:01.801504657Z" level=error msg="encountered an error cleaning up failed sandbox \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.803394 containerd[1456]: time="2026-04-17T23:40:01.803372935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69974dd7c9-fzcgt,Uid:fea8698d-451a-456c-836e-d755be496f91,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.803786 kubelet[2540]: E0417 23:40:01.803591 2540 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.803786 kubelet[2540]: E0417 23:40:01.803646 2540 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69974dd7c9-fzcgt" Apr 17 23:40:01.803786 kubelet[2540]: E0417 23:40:01.803660 2540 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69974dd7c9-fzcgt" Apr 17 23:40:01.803993 containerd[1456]: time="2026-04-17T23:40:01.803687095Z" level=error msg="encountered an error cleaning up failed sandbox \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.803993 containerd[1456]: time="2026-04-17T23:40:01.803954884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d5d6df97d-qjfqb,Uid:b465cf39-1a7a-43c8-8b20-c06a445d067b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.804981 kubelet[2540]: E0417 23:40:01.804287 2540 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"whisker-69974dd7c9-fzcgt_calico-system(fea8698d-451a-456c-836e-d755be496f91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69974dd7c9-fzcgt_calico-system(fea8698d-451a-456c-836e-d755be496f91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69974dd7c9-fzcgt" podUID="fea8698d-451a-456c-836e-d755be496f91" Apr 17 23:40:01.804981 kubelet[2540]: E0417 23:40:01.804424 2540 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.804981 kubelet[2540]: E0417 23:40:01.804723 2540 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d5d6df97d-qjfqb" Apr 17 23:40:01.805096 kubelet[2540]: E0417 23:40:01.804951 2540 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d5d6df97d-qjfqb" Apr 17 23:40:01.805096 kubelet[2540]: E0417 23:40:01.804984 2540 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d5d6df97d-qjfqb_calico-system(b465cf39-1a7a-43c8-8b20-c06a445d067b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d5d6df97d-qjfqb_calico-system(b465cf39-1a7a-43c8-8b20-c06a445d067b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d5d6df97d-qjfqb" podUID="b465cf39-1a7a-43c8-8b20-c06a445d067b" Apr 17 23:40:01.806119 containerd[1456]: time="2026-04-17T23:40:01.806098012Z" level=error msg="Failed to destroy network for sandbox \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.806802 containerd[1456]: time="2026-04-17T23:40:01.806779411Z" level=error msg="encountered an error cleaning up failed sandbox \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.806911 containerd[1456]: time="2026-04-17T23:40:01.806890501Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-747d4d9564-87p66,Uid:fe4b5225-389a-4c5f-90d9-d343b520891b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.807253 kubelet[2540]: E0417 23:40:01.807234 2540 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.807710 kubelet[2540]: E0417 23:40:01.807637 2540 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-747d4d9564-87p66" Apr 17 23:40:01.807814 kubelet[2540]: E0417 23:40:01.807797 2540 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-747d4d9564-87p66" Apr 17 23:40:01.807946 kubelet[2540]: E0417 23:40:01.807928 2540 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-747d4d9564-87p66_calico-system(fe4b5225-389a-4c5f-90d9-d343b520891b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-747d4d9564-87p66_calico-system(fe4b5225-389a-4c5f-90d9-d343b520891b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-747d4d9564-87p66" podUID="fe4b5225-389a-4c5f-90d9-d343b520891b" Apr 17 23:40:01.860147 containerd[1456]: time="2026-04-17T23:40:01.860088939Z" level=error msg="Failed to destroy network for sandbox \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.860459 containerd[1456]: time="2026-04-17T23:40:01.860428108Z" level=error msg="encountered an error cleaning up failed sandbox \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.860496 containerd[1456]: time="2026-04-17T23:40:01.860471258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9m4mr,Uid:7519db54-398f-4489-8839-90013af059d5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.860882 kubelet[2540]: E0417 23:40:01.860845 2540 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:01.860979 kubelet[2540]: E0417 23:40:01.860891 2540 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9m4mr" Apr 17 23:40:01.860979 kubelet[2540]: E0417 23:40:01.860909 2540 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9m4mr" Apr 17 23:40:01.860979 kubelet[2540]: E0417 23:40:01.860958 2540 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9m4mr_calico-system(7519db54-398f-4489-8839-90013af059d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9m4mr_calico-system(7519db54-398f-4489-8839-90013af059d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9m4mr" podUID="7519db54-398f-4489-8839-90013af059d5" Apr 17 23:40:01.889134 kubelet[2540]: I0417 23:40:01.889116 2540 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Apr 17 23:40:01.889858 containerd[1456]: time="2026-04-17T23:40:01.889833924Z" level=info msg="StopPodSandbox for \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\"" Apr 17 23:40:01.890005 containerd[1456]: time="2026-04-17T23:40:01.889969634Z" level=info msg="Ensure that sandbox 6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f in task-service has been cleanup successfully" Apr 17 23:40:01.891376 kubelet[2540]: I0417 23:40:01.891279 2540 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Apr 17 23:40:01.892266 containerd[1456]: time="2026-04-17T23:40:01.892245001Z" level=info msg="StopPodSandbox for \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\"" Apr 17 23:40:01.892375 containerd[1456]: time="2026-04-17T23:40:01.892358571Z" level=info msg="Ensure that sandbox 3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604 in task-service has been cleanup successfully" Apr 17 23:40:01.897056 kubelet[2540]: I0417 23:40:01.896378 2540 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Apr 17 23:40:01.898329 containerd[1456]: time="2026-04-17T23:40:01.897746855Z" level=info msg="StopPodSandbox for \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\"" Apr 
17 23:40:01.898994 containerd[1456]: time="2026-04-17T23:40:01.898968644Z" level=info msg="Ensure that sandbox 5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914 in task-service has been cleanup successfully" Apr 17 23:40:01.900133 kubelet[2540]: I0417 23:40:01.900117 2540 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Apr 17 23:40:01.901418 containerd[1456]: time="2026-04-17T23:40:01.901070931Z" level=info msg="StopPodSandbox for \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\"" Apr 17 23:40:01.901418 containerd[1456]: time="2026-04-17T23:40:01.901223171Z" level=info msg="Ensure that sandbox 6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416 in task-service has been cleanup successfully" Apr 17 23:40:01.905014 kubelet[2540]: I0417 23:40:01.904996 2540 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Apr 17 23:40:01.906354 containerd[1456]: time="2026-04-17T23:40:01.906134335Z" level=info msg="StopPodSandbox for \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\"" Apr 17 23:40:01.906449 containerd[1456]: time="2026-04-17T23:40:01.906246955Z" level=info msg="Ensure that sandbox 3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6 in task-service has been cleanup successfully" Apr 17 23:40:01.926203 kubelet[2540]: I0417 23:40:01.925965 2540 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Apr 17 23:40:01.932034 containerd[1456]: time="2026-04-17T23:40:01.931452196Z" level=info msg="StopPodSandbox for \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\"" Apr 17 23:40:01.932034 containerd[1456]: time="2026-04-17T23:40:01.931599975Z" level=info msg="Ensure that sandbox 
251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029 in task-service has been cleanup successfully" Apr 17 23:40:01.942666 kubelet[2540]: I0417 23:40:01.941336 2540 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Apr 17 23:40:01.949598 containerd[1456]: time="2026-04-17T23:40:01.949571694Z" level=info msg="CreateContainer within sandbox \"31f79d66d9eb777c0521a08b0299608c1710b0c930cb4e05d42d194255848707\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 17 23:40:01.950516 kubelet[2540]: I0417 23:40:01.950500 2540 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Apr 17 23:40:01.953158 containerd[1456]: time="2026-04-17T23:40:01.950030024Z" level=info msg="StopPodSandbox for \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\"" Apr 17 23:40:01.953355 containerd[1456]: time="2026-04-17T23:40:01.953337810Z" level=info msg="Ensure that sandbox 7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378 in task-service has been cleanup successfully" Apr 17 23:40:01.960878 containerd[1456]: time="2026-04-17T23:40:01.952961150Z" level=info msg="StopPodSandbox for \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\"" Apr 17 23:40:01.961106 containerd[1456]: time="2026-04-17T23:40:01.961088991Z" level=info msg="Ensure that sandbox 918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046 in task-service has been cleanup successfully" Apr 17 23:40:02.004975 containerd[1456]: time="2026-04-17T23:40:02.004931670Z" level=info msg="CreateContainer within sandbox \"31f79d66d9eb777c0521a08b0299608c1710b0c930cb4e05d42d194255848707\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"091bb9b76b32612ca0cdbad784ad26d50a3fc1766f3a5848bc91509bfe68ec97\"" Apr 17 23:40:02.007253 
containerd[1456]: time="2026-04-17T23:40:02.007227568Z" level=info msg="StartContainer for \"091bb9b76b32612ca0cdbad784ad26d50a3fc1766f3a5848bc91509bfe68ec97\"" Apr 17 23:40:02.028735 containerd[1456]: time="2026-04-17T23:40:02.028608724Z" level=error msg="StopPodSandbox for \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\" failed" error="failed to destroy network for sandbox \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:02.029079 kubelet[2540]: E0417 23:40:02.028933 2540 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Apr 17 23:40:02.029079 kubelet[2540]: E0417 23:40:02.028980 2540 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f"} Apr 17 23:40:02.029079 kubelet[2540]: E0417 23:40:02.029022 2540 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe4b5225-389a-4c5f-90d9-d343b520891b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:40:02.029079 
kubelet[2540]: E0417 23:40:02.029047 2540 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe4b5225-389a-4c5f-90d9-d343b520891b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-747d4d9564-87p66" podUID="fe4b5225-389a-4c5f-90d9-d343b520891b" Apr 17 23:40:02.036377 containerd[1456]: time="2026-04-17T23:40:02.036186126Z" level=error msg="StopPodSandbox for \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\" failed" error="failed to destroy network for sandbox \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:02.036613 kubelet[2540]: E0417 23:40:02.036478 2540 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Apr 17 23:40:02.036613 kubelet[2540]: E0417 23:40:02.036548 2540 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604"} Apr 17 23:40:02.036613 kubelet[2540]: E0417 23:40:02.036572 2540 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"fea8698d-451a-456c-836e-d755be496f91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:40:02.036613 kubelet[2540]: E0417 23:40:02.036592 2540 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fea8698d-451a-456c-836e-d755be496f91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69974dd7c9-fzcgt" podUID="fea8698d-451a-456c-836e-d755be496f91" Apr 17 23:40:02.045823 containerd[1456]: time="2026-04-17T23:40:02.045548986Z" level=error msg="StopPodSandbox for \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\" failed" error="failed to destroy network for sandbox \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:02.045901 kubelet[2540]: E0417 23:40:02.045721 2540 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Apr 17 23:40:02.045901 kubelet[2540]: E0417 23:40:02.045751 2540 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6"} Apr 17 23:40:02.045901 kubelet[2540]: E0417 23:40:02.045773 2540 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3b2079f5-4ea9-4796-8181-3d79d0da7db2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:40:02.045901 kubelet[2540]: E0417 23:40:02.045794 2540 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3b2079f5-4ea9-4796-8181-3d79d0da7db2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-2kcx5" podUID="3b2079f5-4ea9-4796-8181-3d79d0da7db2" Apr 17 23:40:02.050998 containerd[1456]: time="2026-04-17T23:40:02.050276170Z" level=error msg="StopPodSandbox for \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\" failed" error="failed to destroy network for sandbox \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 17 23:40:02.051241 kubelet[2540]: E0417 23:40:02.051122 2540 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Apr 17 23:40:02.051241 kubelet[2540]: E0417 23:40:02.051174 2540 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914"} Apr 17 23:40:02.051241 kubelet[2540]: E0417 23:40:02.051194 2540 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b465cf39-1a7a-43c8-8b20-c06a445d067b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:40:02.051241 kubelet[2540]: E0417 23:40:02.051214 2540 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b465cf39-1a7a-43c8-8b20-c06a445d067b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d5d6df97d-qjfqb" podUID="b465cf39-1a7a-43c8-8b20-c06a445d067b" Apr 
17 23:40:02.052894 containerd[1456]: time="2026-04-17T23:40:02.052870968Z" level=error msg="StopPodSandbox for \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\" failed" error="failed to destroy network for sandbox \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:02.053076 kubelet[2540]: E0417 23:40:02.053054 2540 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Apr 17 23:40:02.053147 kubelet[2540]: E0417 23:40:02.053134 2540 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416"} Apr 17 23:40:02.053219 kubelet[2540]: E0417 23:40:02.053205 2540 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"98a1a605-7c5c-4f5c-801d-322ef1144e09\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:40:02.053303 kubelet[2540]: E0417 23:40:02.053287 2540 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"98a1a605-7c5c-4f5c-801d-322ef1144e09\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-lh2x8" podUID="98a1a605-7c5c-4f5c-801d-322ef1144e09" Apr 17 23:40:02.058744 containerd[1456]: time="2026-04-17T23:40:02.058681041Z" level=error msg="StopPodSandbox for \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\" failed" error="failed to destroy network for sandbox \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:02.058979 kubelet[2540]: E0417 23:40:02.058952 2540 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Apr 17 23:40:02.059066 kubelet[2540]: E0417 23:40:02.059052 2540 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029"} Apr 17 23:40:02.059134 kubelet[2540]: E0417 23:40:02.059119 2540 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7519db54-398f-4489-8839-90013af059d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:40:02.059288 kubelet[2540]: E0417 23:40:02.059261 2540 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7519db54-398f-4489-8839-90013af059d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9m4mr" podUID="7519db54-398f-4489-8839-90013af059d5" Apr 17 23:40:02.078837 systemd[1]: Started cri-containerd-091bb9b76b32612ca0cdbad784ad26d50a3fc1766f3a5848bc91509bfe68ec97.scope - libcontainer container 091bb9b76b32612ca0cdbad784ad26d50a3fc1766f3a5848bc91509bfe68ec97. 
Apr 17 23:40:02.090547 containerd[1456]: time="2026-04-17T23:40:02.090505626Z" level=error msg="StopPodSandbox for \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\" failed" error="failed to destroy network for sandbox \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:02.090873 kubelet[2540]: E0417 23:40:02.090841 2540 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Apr 17 23:40:02.092655 kubelet[2540]: E0417 23:40:02.092632 2540 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378"} Apr 17 23:40:02.092710 kubelet[2540]: E0417 23:40:02.092672 2540 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e92690a1-a6c6-4a96-8b33-2a2ebd323317\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:40:02.092776 kubelet[2540]: E0417 23:40:02.092736 2540 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e92690a1-a6c6-4a96-8b33-2a2ebd323317\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-hxd5z" podUID="e92690a1-a6c6-4a96-8b33-2a2ebd323317" Apr 17 23:40:02.093553 containerd[1456]: time="2026-04-17T23:40:02.093523893Z" level=error msg="StopPodSandbox for \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\" failed" error="failed to destroy network for sandbox \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:40:02.093647 kubelet[2540]: E0417 23:40:02.093623 2540 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Apr 17 23:40:02.093674 kubelet[2540]: E0417 23:40:02.093650 2540 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046"} Apr 17 23:40:02.093722 kubelet[2540]: E0417 23:40:02.093666 2540 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a9d870f4-95e3-4941-9a77-e1b80afca9bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:40:02.093768 kubelet[2540]: E0417 23:40:02.093686 2540 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a9d870f4-95e3-4941-9a77-e1b80afca9bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-747d4d9564-p57qj" podUID="a9d870f4-95e3-4941-9a77-e1b80afca9bd" Apr 17 23:40:02.117181 containerd[1456]: time="2026-04-17T23:40:02.117147897Z" level=info msg="StartContainer for \"091bb9b76b32612ca0cdbad784ad26d50a3fc1766f3a5848bc91509bfe68ec97\" returns successfully" Apr 17 23:40:02.502490 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416-shm.mount: Deactivated successfully. Apr 17 23:40:02.502835 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378-shm.mount: Deactivated successfully. Apr 17 23:40:02.502936 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046-shm.mount: Deactivated successfully. Apr 17 23:40:02.503016 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6-shm.mount: Deactivated successfully. 
Apr 17 23:40:02.954988 containerd[1456]: time="2026-04-17T23:40:02.954953300Z" level=info msg="StopPodSandbox for \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\"" Apr 17 23:40:02.984195 kubelet[2540]: I0417 23:40:02.983919 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bfnsx" podStartSLOduration=3.420407225 podStartE2EDuration="11.983901169s" podCreationTimestamp="2026-04-17 23:39:51 +0000 UTC" firstStartedPulling="2026-04-17 23:39:51.874312159 +0000 UTC m=+15.208209526" lastFinishedPulling="2026-04-17 23:40:00.437806103 +0000 UTC m=+23.771703470" observedRunningTime="2026-04-17 23:40:02.98337763 +0000 UTC m=+26.317274997" watchObservedRunningTime="2026-04-17 23:40:02.983901169 +0000 UTC m=+26.317798536" Apr 17 23:40:03.051248 containerd[1456]: 2026-04-17 23:40:03.011 [INFO][3803] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Apr 17 23:40:03.051248 containerd[1456]: 2026-04-17 23:40:03.011 [INFO][3803] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" iface="eth0" netns="/var/run/netns/cni-e6a987f6-2f81-dc03-8e29-af7957e8c65e" Apr 17 23:40:03.051248 containerd[1456]: 2026-04-17 23:40:03.012 [INFO][3803] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" iface="eth0" netns="/var/run/netns/cni-e6a987f6-2f81-dc03-8e29-af7957e8c65e" Apr 17 23:40:03.051248 containerd[1456]: 2026-04-17 23:40:03.014 [INFO][3803] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" iface="eth0" netns="/var/run/netns/cni-e6a987f6-2f81-dc03-8e29-af7957e8c65e" Apr 17 23:40:03.051248 containerd[1456]: 2026-04-17 23:40:03.014 [INFO][3803] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Apr 17 23:40:03.051248 containerd[1456]: 2026-04-17 23:40:03.014 [INFO][3803] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Apr 17 23:40:03.051248 containerd[1456]: 2026-04-17 23:40:03.033 [INFO][3810] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" HandleID="k8s-pod-network.3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Workload="172--238--189--76-k8s-whisker--69974dd7c9--fzcgt-eth0" Apr 17 23:40:03.051248 containerd[1456]: 2026-04-17 23:40:03.033 [INFO][3810] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:03.051248 containerd[1456]: 2026-04-17 23:40:03.033 [INFO][3810] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:03.051248 containerd[1456]: 2026-04-17 23:40:03.042 [WARNING][3810] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" HandleID="k8s-pod-network.3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Workload="172--238--189--76-k8s-whisker--69974dd7c9--fzcgt-eth0" Apr 17 23:40:03.051248 containerd[1456]: 2026-04-17 23:40:03.042 [INFO][3810] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" HandleID="k8s-pod-network.3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Workload="172--238--189--76-k8s-whisker--69974dd7c9--fzcgt-eth0" Apr 17 23:40:03.051248 containerd[1456]: 2026-04-17 23:40:03.043 [INFO][3810] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:03.051248 containerd[1456]: 2026-04-17 23:40:03.047 [INFO][3803] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Apr 17 23:40:03.052372 containerd[1456]: time="2026-04-17T23:40:03.051788978Z" level=info msg="TearDown network for sandbox \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\" successfully" Apr 17 23:40:03.052372 containerd[1456]: time="2026-04-17T23:40:03.051815948Z" level=info msg="StopPodSandbox for \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\" returns successfully" Apr 17 23:40:03.055421 systemd[1]: run-netns-cni\x2de6a987f6\x2d2f81\x2ddc03\x2d8e29\x2daf7957e8c65e.mount: Deactivated successfully. 
Apr 17 23:40:03.150752 kubelet[2540]: I0417 23:40:03.149659 2540 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/fea8698d-451a-456c-836e-d755be496f91-nginx-config\") pod \"fea8698d-451a-456c-836e-d755be496f91\" (UID: \"fea8698d-451a-456c-836e-d755be496f91\") " Apr 17 23:40:03.150752 kubelet[2540]: I0417 23:40:03.149692 2540 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fea8698d-451a-456c-836e-d755be496f91-whisker-ca-bundle\") pod \"fea8698d-451a-456c-836e-d755be496f91\" (UID: \"fea8698d-451a-456c-836e-d755be496f91\") " Apr 17 23:40:03.150752 kubelet[2540]: I0417 23:40:03.149922 2540 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgzs6\" (UniqueName: \"kubernetes.io/projected/fea8698d-451a-456c-836e-d755be496f91-kube-api-access-mgzs6\") pod \"fea8698d-451a-456c-836e-d755be496f91\" (UID: \"fea8698d-451a-456c-836e-d755be496f91\") " Apr 17 23:40:03.150752 kubelet[2540]: I0417 23:40:03.149943 2540 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fea8698d-451a-456c-836e-d755be496f91-whisker-backend-key-pair\") pod \"fea8698d-451a-456c-836e-d755be496f91\" (UID: \"fea8698d-451a-456c-836e-d755be496f91\") " Apr 17 23:40:03.150752 kubelet[2540]: I0417 23:40:03.150162 2540 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fea8698d-451a-456c-836e-d755be496f91-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "fea8698d-451a-456c-836e-d755be496f91" (UID: "fea8698d-451a-456c-836e-d755be496f91"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:40:03.150916 kubelet[2540]: I0417 23:40:03.150498 2540 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fea8698d-451a-456c-836e-d755be496f91-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fea8698d-451a-456c-836e-d755be496f91" (UID: "fea8698d-451a-456c-836e-d755be496f91"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:40:03.155508 systemd[1]: var-lib-kubelet-pods-fea8698d\x2d451a\x2d456c\x2d836e\x2dd755be496f91-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmgzs6.mount: Deactivated successfully. Apr 17 23:40:03.155986 kubelet[2540]: I0417 23:40:03.155764 2540 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fea8698d-451a-456c-836e-d755be496f91-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fea8698d-451a-456c-836e-d755be496f91" (UID: "fea8698d-451a-456c-836e-d755be496f91"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 23:40:03.156070 systemd[1]: var-lib-kubelet-pods-fea8698d\x2d451a\x2d456c\x2d836e\x2dd755be496f91-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 17 23:40:03.156496 kubelet[2540]: I0417 23:40:03.156469 2540 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fea8698d-451a-456c-836e-d755be496f91-kube-api-access-mgzs6" (OuterVolumeSpecName: "kube-api-access-mgzs6") pod "fea8698d-451a-456c-836e-d755be496f91" (UID: "fea8698d-451a-456c-836e-d755be496f91"). InnerVolumeSpecName "kube-api-access-mgzs6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:40:03.251061 kubelet[2540]: I0417 23:40:03.250980 2540 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mgzs6\" (UniqueName: \"kubernetes.io/projected/fea8698d-451a-456c-836e-d755be496f91-kube-api-access-mgzs6\") on node \"172-238-189-76\" DevicePath \"\"" Apr 17 23:40:03.251061 kubelet[2540]: I0417 23:40:03.250999 2540 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fea8698d-451a-456c-836e-d755be496f91-whisker-backend-key-pair\") on node \"172-238-189-76\" DevicePath \"\"" Apr 17 23:40:03.251061 kubelet[2540]: I0417 23:40:03.251008 2540 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/fea8698d-451a-456c-836e-d755be496f91-nginx-config\") on node \"172-238-189-76\" DevicePath \"\"" Apr 17 23:40:03.251061 kubelet[2540]: I0417 23:40:03.251017 2540 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fea8698d-451a-456c-836e-d755be496f91-whisker-ca-bundle\") on node \"172-238-189-76\" DevicePath \"\"" Apr 17 23:40:03.962934 kubelet[2540]: I0417 23:40:03.961532 2540 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:40:03.966767 systemd[1]: Removed slice kubepods-besteffort-podfea8698d_451a_456c_836e_d755be496f91.slice - libcontainer container kubepods-besteffort-podfea8698d_451a_456c_836e_d755be496f91.slice. Apr 17 23:40:04.036307 systemd[1]: Created slice kubepods-besteffort-pod80f13afd_e09a_403a_973f_ab60df4136ab.slice - libcontainer container kubepods-besteffort-pod80f13afd_e09a_403a_973f_ab60df4136ab.slice. 
Apr 17 23:40:04.059076 kubelet[2540]: I0417 23:40:04.059029 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/80f13afd-e09a-403a-973f-ab60df4136ab-nginx-config\") pod \"whisker-76bc6d445b-mr5pv\" (UID: \"80f13afd-e09a-403a-973f-ab60df4136ab\") " pod="calico-system/whisker-76bc6d445b-mr5pv" Apr 17 23:40:04.059076 kubelet[2540]: I0417 23:40:04.059059 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/80f13afd-e09a-403a-973f-ab60df4136ab-whisker-backend-key-pair\") pod \"whisker-76bc6d445b-mr5pv\" (UID: \"80f13afd-e09a-403a-973f-ab60df4136ab\") " pod="calico-system/whisker-76bc6d445b-mr5pv" Apr 17 23:40:04.059422 kubelet[2540]: I0417 23:40:04.059081 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc4mb\" (UniqueName: \"kubernetes.io/projected/80f13afd-e09a-403a-973f-ab60df4136ab-kube-api-access-bc4mb\") pod \"whisker-76bc6d445b-mr5pv\" (UID: \"80f13afd-e09a-403a-973f-ab60df4136ab\") " pod="calico-system/whisker-76bc6d445b-mr5pv" Apr 17 23:40:04.059422 kubelet[2540]: I0417 23:40:04.059108 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80f13afd-e09a-403a-973f-ab60df4136ab-whisker-ca-bundle\") pod \"whisker-76bc6d445b-mr5pv\" (UID: \"80f13afd-e09a-403a-973f-ab60df4136ab\") " pod="calico-system/whisker-76bc6d445b-mr5pv" Apr 17 23:40:04.345303 containerd[1456]: time="2026-04-17T23:40:04.345266283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76bc6d445b-mr5pv,Uid:80f13afd-e09a-403a-973f-ab60df4136ab,Namespace:calico-system,Attempt:0,}" Apr 17 23:40:04.456687 systemd-networkd[1376]: cali0b97d47728a: Link UP Apr 17 23:40:04.457162 systemd-networkd[1376]: 
cali0b97d47728a: Gained carrier Apr 17 23:40:04.475803 containerd[1456]: 2026-04-17 23:40:04.376 [ERROR][3919] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:40:04.475803 containerd[1456]: 2026-04-17 23:40:04.386 [INFO][3919] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--189--76-k8s-whisker--76bc6d445b--mr5pv-eth0 whisker-76bc6d445b- calico-system 80f13afd-e09a-403a-973f-ab60df4136ab 886 0 2026-04-17 23:40:04 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76bc6d445b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-238-189-76 whisker-76bc6d445b-mr5pv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0b97d47728a [] [] }} ContainerID="04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5" Namespace="calico-system" Pod="whisker-76bc6d445b-mr5pv" WorkloadEndpoint="172--238--189--76-k8s-whisker--76bc6d445b--mr5pv-" Apr 17 23:40:04.475803 containerd[1456]: 2026-04-17 23:40:04.386 [INFO][3919] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5" Namespace="calico-system" Pod="whisker-76bc6d445b-mr5pv" WorkloadEndpoint="172--238--189--76-k8s-whisker--76bc6d445b--mr5pv-eth0" Apr 17 23:40:04.475803 containerd[1456]: 2026-04-17 23:40:04.410 [INFO][3931] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5" HandleID="k8s-pod-network.04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5" Workload="172--238--189--76-k8s-whisker--76bc6d445b--mr5pv-eth0" Apr 17 23:40:04.475803 containerd[1456]: 2026-04-17 23:40:04.415 [INFO][3931] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5" HandleID="k8s-pod-network.04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5" Workload="172--238--189--76-k8s-whisker--76bc6d445b--mr5pv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036a050), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-189-76", "pod":"whisker-76bc6d445b-mr5pv", "timestamp":"2026-04-17 23:40:04.410173231 +0000 UTC"}, Hostname:"172-238-189-76", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00030e000)} Apr 17 23:40:04.475803 containerd[1456]: 2026-04-17 23:40:04.415 [INFO][3931] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:04.475803 containerd[1456]: 2026-04-17 23:40:04.415 [INFO][3931] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:40:04.475803 containerd[1456]: 2026-04-17 23:40:04.415 [INFO][3931] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-189-76' Apr 17 23:40:04.475803 containerd[1456]: 2026-04-17 23:40:04.418 [INFO][3931] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5" host="172-238-189-76" Apr 17 23:40:04.475803 containerd[1456]: 2026-04-17 23:40:04.421 [INFO][3931] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-189-76" Apr 17 23:40:04.475803 containerd[1456]: 2026-04-17 23:40:04.428 [INFO][3931] ipam/ipam.go 526: Trying affinity for 192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:04.475803 containerd[1456]: 2026-04-17 23:40:04.430 [INFO][3931] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:04.475803 containerd[1456]: 2026-04-17 23:40:04.432 [INFO][3931] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:04.475803 containerd[1456]: 2026-04-17 23:40:04.432 [INFO][3931] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.64/26 handle="k8s-pod-network.04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5" host="172-238-189-76" Apr 17 23:40:04.475803 containerd[1456]: 2026-04-17 23:40:04.433 [INFO][3931] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5 Apr 17 23:40:04.475803 containerd[1456]: 2026-04-17 23:40:04.438 [INFO][3931] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.64/26 handle="k8s-pod-network.04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5" host="172-238-189-76" Apr 17 23:40:04.475803 containerd[1456]: 2026-04-17 23:40:04.445 [INFO][3931] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.65/26] block=192.168.75.64/26 
handle="k8s-pod-network.04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5" host="172-238-189-76" Apr 17 23:40:04.475803 containerd[1456]: 2026-04-17 23:40:04.445 [INFO][3931] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.65/26] handle="k8s-pod-network.04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5" host="172-238-189-76" Apr 17 23:40:04.475803 containerd[1456]: 2026-04-17 23:40:04.445 [INFO][3931] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:04.475803 containerd[1456]: 2026-04-17 23:40:04.445 [INFO][3931] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.65/26] IPv6=[] ContainerID="04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5" HandleID="k8s-pod-network.04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5" Workload="172--238--189--76-k8s-whisker--76bc6d445b--mr5pv-eth0" Apr 17 23:40:04.476728 containerd[1456]: 2026-04-17 23:40:04.448 [INFO][3919] cni-plugin/k8s.go 418: Populated endpoint ContainerID="04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5" Namespace="calico-system" Pod="whisker-76bc6d445b-mr5pv" WorkloadEndpoint="172--238--189--76-k8s-whisker--76bc6d445b--mr5pv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-whisker--76bc6d445b--mr5pv-eth0", GenerateName:"whisker-76bc6d445b-", Namespace:"calico-system", SelfLink:"", UID:"80f13afd-e09a-403a-973f-ab60df4136ab", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 40, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76bc6d445b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"", Pod:"whisker-76bc6d445b-mr5pv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.75.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0b97d47728a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:04.476728 containerd[1456]: 2026-04-17 23:40:04.448 [INFO][3919] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.65/32] ContainerID="04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5" Namespace="calico-system" Pod="whisker-76bc6d445b-mr5pv" WorkloadEndpoint="172--238--189--76-k8s-whisker--76bc6d445b--mr5pv-eth0" Apr 17 23:40:04.476728 containerd[1456]: 2026-04-17 23:40:04.448 [INFO][3919] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b97d47728a ContainerID="04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5" Namespace="calico-system" Pod="whisker-76bc6d445b-mr5pv" WorkloadEndpoint="172--238--189--76-k8s-whisker--76bc6d445b--mr5pv-eth0" Apr 17 23:40:04.476728 containerd[1456]: 2026-04-17 23:40:04.458 [INFO][3919] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5" Namespace="calico-system" Pod="whisker-76bc6d445b-mr5pv" WorkloadEndpoint="172--238--189--76-k8s-whisker--76bc6d445b--mr5pv-eth0" Apr 17 23:40:04.476728 containerd[1456]: 2026-04-17 23:40:04.458 [INFO][3919] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5" Namespace="calico-system" 
Pod="whisker-76bc6d445b-mr5pv" WorkloadEndpoint="172--238--189--76-k8s-whisker--76bc6d445b--mr5pv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-whisker--76bc6d445b--mr5pv-eth0", GenerateName:"whisker-76bc6d445b-", Namespace:"calico-system", SelfLink:"", UID:"80f13afd-e09a-403a-973f-ab60df4136ab", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 40, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76bc6d445b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5", Pod:"whisker-76bc6d445b-mr5pv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.75.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0b97d47728a", MAC:"6a:df:00:46:f2:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:04.476728 containerd[1456]: 2026-04-17 23:40:04.471 [INFO][3919] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5" Namespace="calico-system" Pod="whisker-76bc6d445b-mr5pv" WorkloadEndpoint="172--238--189--76-k8s-whisker--76bc6d445b--mr5pv-eth0" Apr 17 23:40:04.503852 containerd[1456]: 
time="2026-04-17T23:40:04.503771821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:40:04.503935 containerd[1456]: time="2026-04-17T23:40:04.503868111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:40:04.503935 containerd[1456]: time="2026-04-17T23:40:04.503901991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:04.504029 containerd[1456]: time="2026-04-17T23:40:04.503991861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:04.534019 systemd[1]: Started cri-containerd-04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5.scope - libcontainer container 04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5. 
Apr 17 23:40:04.581964 containerd[1456]: time="2026-04-17T23:40:04.581855816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76bc6d445b-mr5pv,Uid:80f13afd-e09a-403a-973f-ab60df4136ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5\"" Apr 17 23:40:04.584508 containerd[1456]: time="2026-04-17T23:40:04.584302403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 23:40:04.767508 kubelet[2540]: I0417 23:40:04.766635 2540 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fea8698d-451a-456c-836e-d755be496f91" path="/var/lib/kubelet/pods/fea8698d-451a-456c-836e-d755be496f91/volumes" Apr 17 23:40:05.369679 containerd[1456]: time="2026-04-17T23:40:05.369631550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:05.370747 containerd[1456]: time="2026-04-17T23:40:05.370573379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 17 23:40:05.372505 containerd[1456]: time="2026-04-17T23:40:05.371264429Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:05.373725 containerd[1456]: time="2026-04-17T23:40:05.373240047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:05.374394 containerd[1456]: time="2026-04-17T23:40:05.374353276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 790.026103ms" Apr 17 23:40:05.374431 containerd[1456]: time="2026-04-17T23:40:05.374391906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 17 23:40:05.378189 containerd[1456]: time="2026-04-17T23:40:05.377842793Z" level=info msg="CreateContainer within sandbox \"04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:40:05.388046 containerd[1456]: time="2026-04-17T23:40:05.388012064Z" level=info msg="CreateContainer within sandbox \"04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f4c52fdd0f0edcfedbc2c8553e2386606b45f15ae7bc7b2e2d679e54ab1426e3\"" Apr 17 23:40:05.392891 containerd[1456]: time="2026-04-17T23:40:05.391094911Z" level=info msg="StartContainer for \"f4c52fdd0f0edcfedbc2c8553e2386606b45f15ae7bc7b2e2d679e54ab1426e3\"" Apr 17 23:40:05.430665 systemd[1]: run-containerd-runc-k8s.io-f4c52fdd0f0edcfedbc2c8553e2386606b45f15ae7bc7b2e2d679e54ab1426e3-runc.qnZftZ.mount: Deactivated successfully. Apr 17 23:40:05.438859 systemd[1]: Started cri-containerd-f4c52fdd0f0edcfedbc2c8553e2386606b45f15ae7bc7b2e2d679e54ab1426e3.scope - libcontainer container f4c52fdd0f0edcfedbc2c8553e2386606b45f15ae7bc7b2e2d679e54ab1426e3. 
Apr 17 23:40:05.489577 containerd[1456]: time="2026-04-17T23:40:05.489541362Z" level=info msg="StartContainer for \"f4c52fdd0f0edcfedbc2c8553e2386606b45f15ae7bc7b2e2d679e54ab1426e3\" returns successfully" Apr 17 23:40:05.492788 containerd[1456]: time="2026-04-17T23:40:05.492456530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 23:40:06.412829 systemd-networkd[1376]: cali0b97d47728a: Gained IPv6LL Apr 17 23:40:06.970466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1236418643.mount: Deactivated successfully. Apr 17 23:40:06.979859 containerd[1456]: time="2026-04-17T23:40:06.979829333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:06.980587 containerd[1456]: time="2026-04-17T23:40:06.980545723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 17 23:40:06.981096 containerd[1456]: time="2026-04-17T23:40:06.981056912Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:06.982801 containerd[1456]: time="2026-04-17T23:40:06.982714691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:06.983411 containerd[1456]: time="2026-04-17T23:40:06.983383320Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size 
\"17609305\" in 1.4908964s" Apr 17 23:40:06.983455 containerd[1456]: time="2026-04-17T23:40:06.983412340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 17 23:40:06.986248 containerd[1456]: time="2026-04-17T23:40:06.986117568Z" level=info msg="CreateContainer within sandbox \"04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:40:06.999717 containerd[1456]: time="2026-04-17T23:40:06.999305327Z" level=info msg="CreateContainer within sandbox \"04456d78a8dbbdad2fa0b6c6ca446387f515569ae5c121509596fc13b5fd5bc5\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"af3b458ee9ac58d956cc6660aa308b4a0117cf82207a4b8c840dff1f05674972\"" Apr 17 23:40:07.000374 containerd[1456]: time="2026-04-17T23:40:07.000344636Z" level=info msg="StartContainer for \"af3b458ee9ac58d956cc6660aa308b4a0117cf82207a4b8c840dff1f05674972\"" Apr 17 23:40:07.036827 systemd[1]: Started cri-containerd-af3b458ee9ac58d956cc6660aa308b4a0117cf82207a4b8c840dff1f05674972.scope - libcontainer container af3b458ee9ac58d956cc6660aa308b4a0117cf82207a4b8c840dff1f05674972. Apr 17 23:40:07.078071 containerd[1456]: time="2026-04-17T23:40:07.078026125Z" level=info msg="StartContainer for \"af3b458ee9ac58d956cc6660aa308b4a0117cf82207a4b8c840dff1f05674972\" returns successfully" Apr 17 23:40:08.616528 kubelet[2540]: I0417 23:40:08.615631 2540 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:40:08.636531 systemd[1]: run-containerd-runc-k8s.io-091bb9b76b32612ca0cdbad784ad26d50a3fc1766f3a5848bc91509bfe68ec97-runc.g1rJM1.mount: Deactivated successfully. Apr 17 23:40:08.724754 systemd[1]: run-containerd-runc-k8s.io-091bb9b76b32612ca0cdbad784ad26d50a3fc1766f3a5848bc91509bfe68ec97-runc.p7PYu8.mount: Deactivated successfully. 
Apr 17 23:40:12.765897 containerd[1456]: time="2026-04-17T23:40:12.764973351Z" level=info msg="StopPodSandbox for \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\"" Apr 17 23:40:12.815393 kubelet[2540]: I0417 23:40:12.815311 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-76bc6d445b-mr5pv" podStartSLOduration=6.414601356 podStartE2EDuration="8.815296372s" podCreationTimestamp="2026-04-17 23:40:04 +0000 UTC" firstStartedPulling="2026-04-17 23:40:04.583599914 +0000 UTC m=+27.917497291" lastFinishedPulling="2026-04-17 23:40:06.98429494 +0000 UTC m=+30.318192307" observedRunningTime="2026-04-17 23:40:07.983791487 +0000 UTC m=+31.317688854" watchObservedRunningTime="2026-04-17 23:40:12.815296372 +0000 UTC m=+36.149193739" Apr 17 23:40:12.854562 containerd[1456]: 2026-04-17 23:40:12.816 [INFO][4290] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Apr 17 23:40:12.854562 containerd[1456]: 2026-04-17 23:40:12.816 [INFO][4290] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" iface="eth0" netns="/var/run/netns/cni-968137bf-c71b-41fa-bfeb-c4c8dc052441" Apr 17 23:40:12.854562 containerd[1456]: 2026-04-17 23:40:12.816 [INFO][4290] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" iface="eth0" netns="/var/run/netns/cni-968137bf-c71b-41fa-bfeb-c4c8dc052441" Apr 17 23:40:12.854562 containerd[1456]: 2026-04-17 23:40:12.817 [INFO][4290] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" iface="eth0" netns="/var/run/netns/cni-968137bf-c71b-41fa-bfeb-c4c8dc052441" Apr 17 23:40:12.854562 containerd[1456]: 2026-04-17 23:40:12.817 [INFO][4290] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Apr 17 23:40:12.854562 containerd[1456]: 2026-04-17 23:40:12.817 [INFO][4290] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Apr 17 23:40:12.854562 containerd[1456]: 2026-04-17 23:40:12.840 [INFO][4297] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" HandleID="k8s-pod-network.7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Workload="172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0" Apr 17 23:40:12.854562 containerd[1456]: 2026-04-17 23:40:12.840 [INFO][4297] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:12.854562 containerd[1456]: 2026-04-17 23:40:12.840 [INFO][4297] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:12.854562 containerd[1456]: 2026-04-17 23:40:12.847 [WARNING][4297] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" HandleID="k8s-pod-network.7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Workload="172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0" Apr 17 23:40:12.854562 containerd[1456]: 2026-04-17 23:40:12.847 [INFO][4297] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" HandleID="k8s-pod-network.7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Workload="172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0" Apr 17 23:40:12.854562 containerd[1456]: 2026-04-17 23:40:12.849 [INFO][4297] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:12.854562 containerd[1456]: 2026-04-17 23:40:12.851 [INFO][4290] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Apr 17 23:40:12.857833 containerd[1456]: time="2026-04-17T23:40:12.857802787Z" level=info msg="TearDown network for sandbox \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\" successfully" Apr 17 23:40:12.857833 containerd[1456]: time="2026-04-17T23:40:12.857832427Z" level=info msg="StopPodSandbox for \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\" returns successfully" Apr 17 23:40:12.859875 systemd[1]: run-netns-cni\x2d968137bf\x2dc71b\x2d41fa\x2dbfeb\x2dc4c8dc052441.mount: Deactivated successfully. 
Apr 17 23:40:12.861682 containerd[1456]: time="2026-04-17T23:40:12.861646875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-hxd5z,Uid:e92690a1-a6c6-4a96-8b33-2a2ebd323317,Namespace:calico-system,Attempt:1,}" Apr 17 23:40:12.959291 systemd-networkd[1376]: cali0c7c3c5d267: Link UP Apr 17 23:40:12.962657 systemd-networkd[1376]: cali0c7c3c5d267: Gained carrier Apr 17 23:40:12.978190 containerd[1456]: 2026-04-17 23:40:12.893 [ERROR][4306] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:40:12.978190 containerd[1456]: 2026-04-17 23:40:12.902 [INFO][4306] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0 goldmane-cccfbd5cf- calico-system e92690a1-a6c6-4a96-8b33-2a2ebd323317 926 0 2026-04-17 23:39:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-238-189-76 goldmane-cccfbd5cf-hxd5z eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0c7c3c5d267 [] [] }} ContainerID="efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978" Namespace="calico-system" Pod="goldmane-cccfbd5cf-hxd5z" WorkloadEndpoint="172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-" Apr 17 23:40:12.978190 containerd[1456]: 2026-04-17 23:40:12.902 [INFO][4306] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978" Namespace="calico-system" Pod="goldmane-cccfbd5cf-hxd5z" WorkloadEndpoint="172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0" Apr 17 23:40:12.978190 containerd[1456]: 2026-04-17 23:40:12.926 [INFO][4317] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978" HandleID="k8s-pod-network.efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978" Workload="172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0" Apr 17 23:40:12.978190 containerd[1456]: 2026-04-17 23:40:12.931 [INFO][4317] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978" HandleID="k8s-pod-network.efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978" Workload="172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efe60), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-189-76", "pod":"goldmane-cccfbd5cf-hxd5z", "timestamp":"2026-04-17 23:40:12.926510958 +0000 UTC"}, Hostname:"172-238-189-76", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003f18c0)} Apr 17 23:40:12.978190 containerd[1456]: 2026-04-17 23:40:12.931 [INFO][4317] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:12.978190 containerd[1456]: 2026-04-17 23:40:12.931 [INFO][4317] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:40:12.978190 containerd[1456]: 2026-04-17 23:40:12.931 [INFO][4317] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-189-76' Apr 17 23:40:12.978190 containerd[1456]: 2026-04-17 23:40:12.933 [INFO][4317] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978" host="172-238-189-76" Apr 17 23:40:12.978190 containerd[1456]: 2026-04-17 23:40:12.936 [INFO][4317] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-189-76" Apr 17 23:40:12.978190 containerd[1456]: 2026-04-17 23:40:12.940 [INFO][4317] ipam/ipam.go 526: Trying affinity for 192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:12.978190 containerd[1456]: 2026-04-17 23:40:12.941 [INFO][4317] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:12.978190 containerd[1456]: 2026-04-17 23:40:12.943 [INFO][4317] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:12.978190 containerd[1456]: 2026-04-17 23:40:12.943 [INFO][4317] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.64/26 handle="k8s-pod-network.efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978" host="172-238-189-76" Apr 17 23:40:12.978190 containerd[1456]: 2026-04-17 23:40:12.944 [INFO][4317] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978 Apr 17 23:40:12.978190 containerd[1456]: 2026-04-17 23:40:12.947 [INFO][4317] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.64/26 handle="k8s-pod-network.efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978" host="172-238-189-76" Apr 17 23:40:12.978190 containerd[1456]: 2026-04-17 23:40:12.952 [INFO][4317] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.66/26] block=192.168.75.64/26 
handle="k8s-pod-network.efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978" host="172-238-189-76" Apr 17 23:40:12.978190 containerd[1456]: 2026-04-17 23:40:12.952 [INFO][4317] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.66/26] handle="k8s-pod-network.efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978" host="172-238-189-76" Apr 17 23:40:12.978190 containerd[1456]: 2026-04-17 23:40:12.952 [INFO][4317] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:12.978190 containerd[1456]: 2026-04-17 23:40:12.952 [INFO][4317] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.66/26] IPv6=[] ContainerID="efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978" HandleID="k8s-pod-network.efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978" Workload="172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0" Apr 17 23:40:12.978684 containerd[1456]: 2026-04-17 23:40:12.955 [INFO][4306] cni-plugin/k8s.go 418: Populated endpoint ContainerID="efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978" Namespace="calico-system" Pod="goldmane-cccfbd5cf-hxd5z" WorkloadEndpoint="172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"e92690a1-a6c6-4a96-8b33-2a2ebd323317", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"", Pod:"goldmane-cccfbd5cf-hxd5z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0c7c3c5d267", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:12.978684 containerd[1456]: 2026-04-17 23:40:12.955 [INFO][4306] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.66/32] ContainerID="efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978" Namespace="calico-system" Pod="goldmane-cccfbd5cf-hxd5z" WorkloadEndpoint="172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0" Apr 17 23:40:12.978684 containerd[1456]: 2026-04-17 23:40:12.955 [INFO][4306] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c7c3c5d267 ContainerID="efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978" Namespace="calico-system" Pod="goldmane-cccfbd5cf-hxd5z" WorkloadEndpoint="172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0" Apr 17 23:40:12.978684 containerd[1456]: 2026-04-17 23:40:12.963 [INFO][4306] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978" Namespace="calico-system" Pod="goldmane-cccfbd5cf-hxd5z" WorkloadEndpoint="172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0" Apr 17 23:40:12.978684 containerd[1456]: 2026-04-17 23:40:12.965 [INFO][4306] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978" Namespace="calico-system" 
Pod="goldmane-cccfbd5cf-hxd5z" WorkloadEndpoint="172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"e92690a1-a6c6-4a96-8b33-2a2ebd323317", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978", Pod:"goldmane-cccfbd5cf-hxd5z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0c7c3c5d267", MAC:"12:1f:d4:4f:71:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:12.978684 containerd[1456]: 2026-04-17 23:40:12.974 [INFO][4306] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978" Namespace="calico-system" Pod="goldmane-cccfbd5cf-hxd5z" WorkloadEndpoint="172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0" Apr 17 23:40:13.002191 containerd[1456]: 
time="2026-04-17T23:40:13.002049114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:40:13.002882 containerd[1456]: time="2026-04-17T23:40:13.002119524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:40:13.002882 containerd[1456]: time="2026-04-17T23:40:13.002861714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:13.003039 containerd[1456]: time="2026-04-17T23:40:13.002959234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:13.025499 systemd[1]: run-containerd-runc-k8s.io-efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978-runc.okmuFA.mount: Deactivated successfully. Apr 17 23:40:13.036812 systemd[1]: Started cri-containerd-efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978.scope - libcontainer container efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978. 
Apr 17 23:40:13.086136 containerd[1456]: time="2026-04-17T23:40:13.085151470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-hxd5z,Uid:e92690a1-a6c6-4a96-8b33-2a2ebd323317,Namespace:calico-system,Attempt:1,} returns sandbox id \"efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978\"" Apr 17 23:40:13.088580 containerd[1456]: time="2026-04-17T23:40:13.088458938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 23:40:13.768357 containerd[1456]: time="2026-04-17T23:40:13.768277162Z" level=info msg="StopPodSandbox for \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\"" Apr 17 23:40:13.770898 containerd[1456]: time="2026-04-17T23:40:13.770730441Z" level=info msg="StopPodSandbox for \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\"" Apr 17 23:40:13.951718 containerd[1456]: 2026-04-17 23:40:13.897 [INFO][4411] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Apr 17 23:40:13.951718 containerd[1456]: 2026-04-17 23:40:13.897 [INFO][4411] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" iface="eth0" netns="/var/run/netns/cni-c52f2aee-eac4-cf00-0a6e-73808b99b44f" Apr 17 23:40:13.951718 containerd[1456]: 2026-04-17 23:40:13.897 [INFO][4411] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" iface="eth0" netns="/var/run/netns/cni-c52f2aee-eac4-cf00-0a6e-73808b99b44f" Apr 17 23:40:13.951718 containerd[1456]: 2026-04-17 23:40:13.898 [INFO][4411] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" iface="eth0" netns="/var/run/netns/cni-c52f2aee-eac4-cf00-0a6e-73808b99b44f" Apr 17 23:40:13.951718 containerd[1456]: 2026-04-17 23:40:13.898 [INFO][4411] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Apr 17 23:40:13.951718 containerd[1456]: 2026-04-17 23:40:13.898 [INFO][4411] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Apr 17 23:40:13.951718 containerd[1456]: 2026-04-17 23:40:13.931 [INFO][4426] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" HandleID="k8s-pod-network.5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Workload="172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0" Apr 17 23:40:13.951718 containerd[1456]: 2026-04-17 23:40:13.931 [INFO][4426] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:13.951718 containerd[1456]: 2026-04-17 23:40:13.931 [INFO][4426] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:13.951718 containerd[1456]: 2026-04-17 23:40:13.937 [WARNING][4426] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" HandleID="k8s-pod-network.5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Workload="172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0" Apr 17 23:40:13.951718 containerd[1456]: 2026-04-17 23:40:13.937 [INFO][4426] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" HandleID="k8s-pod-network.5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Workload="172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0" Apr 17 23:40:13.951718 containerd[1456]: 2026-04-17 23:40:13.939 [INFO][4426] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:13.951718 containerd[1456]: 2026-04-17 23:40:13.944 [INFO][4411] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Apr 17 23:40:13.954264 containerd[1456]: time="2026-04-17T23:40:13.952643443Z" level=info msg="TearDown network for sandbox \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\" successfully" Apr 17 23:40:13.954264 containerd[1456]: time="2026-04-17T23:40:13.952669693Z" level=info msg="StopPodSandbox for \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\" returns successfully" Apr 17 23:40:13.956524 systemd[1]: run-netns-cni\x2dc52f2aee\x2deac4\x2dcf00\x2d0a6e\x2d73808b99b44f.mount: Deactivated successfully. 
Apr 17 23:40:13.958530 containerd[1456]: time="2026-04-17T23:40:13.958058870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d5d6df97d-qjfqb,Uid:b465cf39-1a7a-43c8-8b20-c06a445d067b,Namespace:calico-system,Attempt:1,}" Apr 17 23:40:13.959530 containerd[1456]: 2026-04-17 23:40:13.893 [INFO][4410] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Apr 17 23:40:13.959530 containerd[1456]: 2026-04-17 23:40:13.893 [INFO][4410] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" iface="eth0" netns="/var/run/netns/cni-f10c01b3-cc1e-588c-37d3-f52d54abe32e" Apr 17 23:40:13.959530 containerd[1456]: 2026-04-17 23:40:13.894 [INFO][4410] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" iface="eth0" netns="/var/run/netns/cni-f10c01b3-cc1e-588c-37d3-f52d54abe32e" Apr 17 23:40:13.959530 containerd[1456]: 2026-04-17 23:40:13.894 [INFO][4410] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" iface="eth0" netns="/var/run/netns/cni-f10c01b3-cc1e-588c-37d3-f52d54abe32e" Apr 17 23:40:13.959530 containerd[1456]: 2026-04-17 23:40:13.894 [INFO][4410] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Apr 17 23:40:13.959530 containerd[1456]: 2026-04-17 23:40:13.894 [INFO][4410] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Apr 17 23:40:13.959530 containerd[1456]: 2026-04-17 23:40:13.931 [INFO][4424] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" HandleID="k8s-pod-network.918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0" Apr 17 23:40:13.959530 containerd[1456]: 2026-04-17 23:40:13.931 [INFO][4424] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:13.959530 containerd[1456]: 2026-04-17 23:40:13.939 [INFO][4424] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:13.959530 containerd[1456]: 2026-04-17 23:40:13.945 [WARNING][4424] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" HandleID="k8s-pod-network.918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0" Apr 17 23:40:13.959530 containerd[1456]: 2026-04-17 23:40:13.945 [INFO][4424] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" HandleID="k8s-pod-network.918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0" Apr 17 23:40:13.959530 containerd[1456]: 2026-04-17 23:40:13.946 [INFO][4424] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:13.959530 containerd[1456]: 2026-04-17 23:40:13.952 [INFO][4410] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Apr 17 23:40:13.960047 containerd[1456]: time="2026-04-17T23:40:13.959681029Z" level=info msg="TearDown network for sandbox \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\" successfully" Apr 17 23:40:13.960047 containerd[1456]: time="2026-04-17T23:40:13.959727969Z" level=info msg="StopPodSandbox for \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\" returns successfully" Apr 17 23:40:13.963441 containerd[1456]: time="2026-04-17T23:40:13.963407757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d4d9564-p57qj,Uid:a9d870f4-95e3-4941-9a77-e1b80afca9bd,Namespace:calico-system,Attempt:1,}" Apr 17 23:40:13.965114 systemd[1]: run-netns-cni\x2df10c01b3\x2dcc1e\x2d588c\x2d37d3\x2df52d54abe32e.mount: Deactivated successfully. 
Apr 17 23:40:14.147771 systemd-networkd[1376]: cali27081aead17: Link UP Apr 17 23:40:14.148051 systemd-networkd[1376]: cali27081aead17: Gained carrier Apr 17 23:40:14.171814 containerd[1456]: 2026-04-17 23:40:14.017 [ERROR][4442] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:40:14.171814 containerd[1456]: 2026-04-17 23:40:14.034 [INFO][4442] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0 calico-kube-controllers-5d5d6df97d- calico-system b465cf39-1a7a-43c8-8b20-c06a445d067b 936 0 2026-04-17 23:39:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d5d6df97d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-238-189-76 calico-kube-controllers-5d5d6df97d-qjfqb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali27081aead17 [] [] }} ContainerID="2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494" Namespace="calico-system" Pod="calico-kube-controllers-5d5d6df97d-qjfqb" WorkloadEndpoint="172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-" Apr 17 23:40:14.171814 containerd[1456]: 2026-04-17 23:40:14.034 [INFO][4442] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494" Namespace="calico-system" Pod="calico-kube-controllers-5d5d6df97d-qjfqb" WorkloadEndpoint="172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0" Apr 17 23:40:14.171814 containerd[1456]: 2026-04-17 23:40:14.088 [INFO][4459] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494" HandleID="k8s-pod-network.2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494" Workload="172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0" Apr 17 23:40:14.171814 containerd[1456]: 2026-04-17 23:40:14.101 [INFO][4459] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494" HandleID="k8s-pod-network.2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494" Workload="172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde90), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-189-76", "pod":"calico-kube-controllers-5d5d6df97d-qjfqb", "timestamp":"2026-04-17 23:40:14.088807843 +0000 UTC"}, Hostname:"172-238-189-76", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188580)} Apr 17 23:40:14.171814 containerd[1456]: 2026-04-17 23:40:14.101 [INFO][4459] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:14.171814 containerd[1456]: 2026-04-17 23:40:14.101 [INFO][4459] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:40:14.171814 containerd[1456]: 2026-04-17 23:40:14.101 [INFO][4459] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-189-76' Apr 17 23:40:14.171814 containerd[1456]: 2026-04-17 23:40:14.104 [INFO][4459] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494" host="172-238-189-76" Apr 17 23:40:14.171814 containerd[1456]: 2026-04-17 23:40:14.109 [INFO][4459] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-189-76" Apr 17 23:40:14.171814 containerd[1456]: 2026-04-17 23:40:14.116 [INFO][4459] ipam/ipam.go 526: Trying affinity for 192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:14.171814 containerd[1456]: 2026-04-17 23:40:14.119 [INFO][4459] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:14.171814 containerd[1456]: 2026-04-17 23:40:14.122 [INFO][4459] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:14.171814 containerd[1456]: 2026-04-17 23:40:14.122 [INFO][4459] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.64/26 handle="k8s-pod-network.2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494" host="172-238-189-76" Apr 17 23:40:14.171814 containerd[1456]: 2026-04-17 23:40:14.123 [INFO][4459] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494 Apr 17 23:40:14.171814 containerd[1456]: 2026-04-17 23:40:14.129 [INFO][4459] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.64/26 handle="k8s-pod-network.2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494" host="172-238-189-76" Apr 17 23:40:14.171814 containerd[1456]: 2026-04-17 23:40:14.135 [INFO][4459] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.67/26] block=192.168.75.64/26 
handle="k8s-pod-network.2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494" host="172-238-189-76" Apr 17 23:40:14.171814 containerd[1456]: 2026-04-17 23:40:14.136 [INFO][4459] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.67/26] handle="k8s-pod-network.2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494" host="172-238-189-76" Apr 17 23:40:14.171814 containerd[1456]: 2026-04-17 23:40:14.136 [INFO][4459] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:14.171814 containerd[1456]: 2026-04-17 23:40:14.136 [INFO][4459] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.67/26] IPv6=[] ContainerID="2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494" HandleID="k8s-pod-network.2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494" Workload="172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0" Apr 17 23:40:14.172467 containerd[1456]: 2026-04-17 23:40:14.138 [INFO][4442] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494" Namespace="calico-system" Pod="calico-kube-controllers-5d5d6df97d-qjfqb" WorkloadEndpoint="172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0", GenerateName:"calico-kube-controllers-5d5d6df97d-", Namespace:"calico-system", SelfLink:"", UID:"b465cf39-1a7a-43c8-8b20-c06a445d067b", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d5d6df97d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"", Pod:"calico-kube-controllers-5d5d6df97d-qjfqb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali27081aead17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:14.172467 containerd[1456]: 2026-04-17 23:40:14.139 [INFO][4442] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.67/32] ContainerID="2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494" Namespace="calico-system" Pod="calico-kube-controllers-5d5d6df97d-qjfqb" WorkloadEndpoint="172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0" Apr 17 23:40:14.172467 containerd[1456]: 2026-04-17 23:40:14.139 [INFO][4442] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27081aead17 ContainerID="2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494" Namespace="calico-system" Pod="calico-kube-controllers-5d5d6df97d-qjfqb" WorkloadEndpoint="172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0" Apr 17 23:40:14.172467 containerd[1456]: 2026-04-17 23:40:14.147 [INFO][4442] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494" Namespace="calico-system" Pod="calico-kube-controllers-5d5d6df97d-qjfqb" 
WorkloadEndpoint="172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0" Apr 17 23:40:14.172467 containerd[1456]: 2026-04-17 23:40:14.147 [INFO][4442] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494" Namespace="calico-system" Pod="calico-kube-controllers-5d5d6df97d-qjfqb" WorkloadEndpoint="172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0", GenerateName:"calico-kube-controllers-5d5d6df97d-", Namespace:"calico-system", SelfLink:"", UID:"b465cf39-1a7a-43c8-8b20-c06a445d067b", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d5d6df97d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494", Pod:"calico-kube-controllers-5d5d6df97d-qjfqb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali27081aead17", MAC:"9e:e9:3f:77:68:f1", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:14.172467 containerd[1456]: 2026-04-17 23:40:14.163 [INFO][4442] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494" Namespace="calico-system" Pod="calico-kube-controllers-5d5d6df97d-qjfqb" WorkloadEndpoint="172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0" Apr 17 23:40:14.241034 containerd[1456]: time="2026-04-17T23:40:14.239834036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:40:14.241034 containerd[1456]: time="2026-04-17T23:40:14.240224266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:40:14.241034 containerd[1456]: time="2026-04-17T23:40:14.240240016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:14.241034 containerd[1456]: time="2026-04-17T23:40:14.240405266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:14.277059 systemd[1]: Started cri-containerd-2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494.scope - libcontainer container 2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494. 
Apr 17 23:40:14.285827 systemd-networkd[1376]: calibc751cbdbd2: Link UP Apr 17 23:40:14.287202 systemd-networkd[1376]: calibc751cbdbd2: Gained carrier Apr 17 23:40:14.317858 containerd[1456]: 2026-04-17 23:40:14.046 [ERROR][4448] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:40:14.317858 containerd[1456]: 2026-04-17 23:40:14.061 [INFO][4448] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0 calico-apiserver-747d4d9564- calico-system a9d870f4-95e3-4941-9a77-e1b80afca9bd 935 0 2026-04-17 23:39:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:747d4d9564 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-238-189-76 calico-apiserver-747d4d9564-p57qj eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calibc751cbdbd2 [] [] }} ContainerID="20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082" Namespace="calico-system" Pod="calico-apiserver-747d4d9564-p57qj" WorkloadEndpoint="172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-" Apr 17 23:40:14.317858 containerd[1456]: 2026-04-17 23:40:14.061 [INFO][4448] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082" Namespace="calico-system" Pod="calico-apiserver-747d4d9564-p57qj" WorkloadEndpoint="172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0" Apr 17 23:40:14.317858 containerd[1456]: 2026-04-17 23:40:14.095 [INFO][4467] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082" HandleID="k8s-pod-network.20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0" Apr 17 23:40:14.317858 containerd[1456]: 2026-04-17 23:40:14.104 [INFO][4467] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082" HandleID="k8s-pod-network.20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277a60), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-189-76", "pod":"calico-apiserver-747d4d9564-p57qj", "timestamp":"2026-04-17 23:40:14.095560399 +0000 UTC"}, Hostname:"172-238-189-76", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000202dc0)} Apr 17 23:40:14.317858 containerd[1456]: 2026-04-17 23:40:14.104 [INFO][4467] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:14.317858 containerd[1456]: 2026-04-17 23:40:14.136 [INFO][4467] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:40:14.317858 containerd[1456]: 2026-04-17 23:40:14.136 [INFO][4467] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-189-76' Apr 17 23:40:14.317858 containerd[1456]: 2026-04-17 23:40:14.206 [INFO][4467] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082" host="172-238-189-76" Apr 17 23:40:14.317858 containerd[1456]: 2026-04-17 23:40:14.211 [INFO][4467] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-189-76" Apr 17 23:40:14.317858 containerd[1456]: 2026-04-17 23:40:14.225 [INFO][4467] ipam/ipam.go 526: Trying affinity for 192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:14.317858 containerd[1456]: 2026-04-17 23:40:14.226 [INFO][4467] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:14.317858 containerd[1456]: 2026-04-17 23:40:14.233 [INFO][4467] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:14.317858 containerd[1456]: 2026-04-17 23:40:14.233 [INFO][4467] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.64/26 handle="k8s-pod-network.20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082" host="172-238-189-76" Apr 17 23:40:14.317858 containerd[1456]: 2026-04-17 23:40:14.242 [INFO][4467] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082 Apr 17 23:40:14.317858 containerd[1456]: 2026-04-17 23:40:14.264 [INFO][4467] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.64/26 handle="k8s-pod-network.20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082" host="172-238-189-76" Apr 17 23:40:14.317858 containerd[1456]: 2026-04-17 23:40:14.275 [INFO][4467] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.68/26] block=192.168.75.64/26 
handle="k8s-pod-network.20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082" host="172-238-189-76" Apr 17 23:40:14.317858 containerd[1456]: 2026-04-17 23:40:14.275 [INFO][4467] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.68/26] handle="k8s-pod-network.20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082" host="172-238-189-76" Apr 17 23:40:14.317858 containerd[1456]: 2026-04-17 23:40:14.275 [INFO][4467] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:14.317858 containerd[1456]: 2026-04-17 23:40:14.275 [INFO][4467] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.68/26] IPv6=[] ContainerID="20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082" HandleID="k8s-pod-network.20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0" Apr 17 23:40:14.318397 containerd[1456]: 2026-04-17 23:40:14.280 [INFO][4448] cni-plugin/k8s.go 418: Populated endpoint ContainerID="20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082" Namespace="calico-system" Pod="calico-apiserver-747d4d9564-p57qj" WorkloadEndpoint="172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0", GenerateName:"calico-apiserver-747d4d9564-", Namespace:"calico-system", SelfLink:"", UID:"a9d870f4-95e3-4941-9a77-e1b80afca9bd", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d4d9564", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"", Pod:"calico-apiserver-747d4d9564-p57qj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibc751cbdbd2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:14.318397 containerd[1456]: 2026-04-17 23:40:14.280 [INFO][4448] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.68/32] ContainerID="20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082" Namespace="calico-system" Pod="calico-apiserver-747d4d9564-p57qj" WorkloadEndpoint="172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0" Apr 17 23:40:14.318397 containerd[1456]: 2026-04-17 23:40:14.280 [INFO][4448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc751cbdbd2 ContainerID="20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082" Namespace="calico-system" Pod="calico-apiserver-747d4d9564-p57qj" WorkloadEndpoint="172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0" Apr 17 23:40:14.318397 containerd[1456]: 2026-04-17 23:40:14.290 [INFO][4448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082" Namespace="calico-system" Pod="calico-apiserver-747d4d9564-p57qj" WorkloadEndpoint="172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0" Apr 17 23:40:14.318397 containerd[1456]: 2026-04-17 23:40:14.291 [INFO][4448] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082" Namespace="calico-system" Pod="calico-apiserver-747d4d9564-p57qj" WorkloadEndpoint="172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0", GenerateName:"calico-apiserver-747d4d9564-", Namespace:"calico-system", SelfLink:"", UID:"a9d870f4-95e3-4941-9a77-e1b80afca9bd", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d4d9564", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082", Pod:"calico-apiserver-747d4d9564-p57qj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibc751cbdbd2", MAC:"e2:21:4e:83:7e:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:14.318397 containerd[1456]: 2026-04-17 23:40:14.306 [INFO][4448] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082" Namespace="calico-system" Pod="calico-apiserver-747d4d9564-p57qj" WorkloadEndpoint="172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0" Apr 17 23:40:14.360949 containerd[1456]: time="2026-04-17T23:40:14.360594766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:40:14.360949 containerd[1456]: time="2026-04-17T23:40:14.360646585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:40:14.360949 containerd[1456]: time="2026-04-17T23:40:14.360689215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:14.360949 containerd[1456]: time="2026-04-17T23:40:14.360841115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:14.406369 systemd[1]: Started cri-containerd-20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082.scope - libcontainer container 20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082. 
Apr 17 23:40:14.431260 containerd[1456]: time="2026-04-17T23:40:14.431220310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d5d6df97d-qjfqb,Uid:b465cf39-1a7a-43c8-8b20-c06a445d067b,Namespace:calico-system,Attempt:1,} returns sandbox id \"2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494\"" Apr 17 23:40:14.527310 containerd[1456]: time="2026-04-17T23:40:14.527277562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d4d9564-p57qj,Uid:a9d870f4-95e3-4941-9a77-e1b80afca9bd,Namespace:calico-system,Attempt:1,} returns sandbox id \"20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082\"" Apr 17 23:40:14.540829 systemd-networkd[1376]: cali0c7c3c5d267: Gained IPv6LL Apr 17 23:40:14.766564 containerd[1456]: time="2026-04-17T23:40:14.766158281Z" level=info msg="StopPodSandbox for \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\"" Apr 17 23:40:14.767672 containerd[1456]: time="2026-04-17T23:40:14.767458910Z" level=info msg="StopPodSandbox for \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\"" Apr 17 23:40:14.899803 containerd[1456]: 2026-04-17 23:40:14.831 [INFO][4611] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Apr 17 23:40:14.899803 containerd[1456]: 2026-04-17 23:40:14.834 [INFO][4611] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" iface="eth0" netns="/var/run/netns/cni-6c380b4d-d1f4-c22d-8bad-497afc4a163f" Apr 17 23:40:14.899803 containerd[1456]: 2026-04-17 23:40:14.835 [INFO][4611] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" iface="eth0" netns="/var/run/netns/cni-6c380b4d-d1f4-c22d-8bad-497afc4a163f" Apr 17 23:40:14.899803 containerd[1456]: 2026-04-17 23:40:14.835 [INFO][4611] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" iface="eth0" netns="/var/run/netns/cni-6c380b4d-d1f4-c22d-8bad-497afc4a163f" Apr 17 23:40:14.899803 containerd[1456]: 2026-04-17 23:40:14.835 [INFO][4611] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Apr 17 23:40:14.899803 containerd[1456]: 2026-04-17 23:40:14.835 [INFO][4611] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Apr 17 23:40:14.899803 containerd[1456]: 2026-04-17 23:40:14.880 [INFO][4628] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" HandleID="k8s-pod-network.6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0" Apr 17 23:40:14.899803 containerd[1456]: 2026-04-17 23:40:14.880 [INFO][4628] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:14.899803 containerd[1456]: 2026-04-17 23:40:14.880 [INFO][4628] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:14.899803 containerd[1456]: 2026-04-17 23:40:14.889 [WARNING][4628] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" HandleID="k8s-pod-network.6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0" Apr 17 23:40:14.899803 containerd[1456]: 2026-04-17 23:40:14.889 [INFO][4628] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" HandleID="k8s-pod-network.6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0" Apr 17 23:40:14.899803 containerd[1456]: 2026-04-17 23:40:14.891 [INFO][4628] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:14.899803 containerd[1456]: 2026-04-17 23:40:14.894 [INFO][4611] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Apr 17 23:40:14.901190 containerd[1456]: time="2026-04-17T23:40:14.900805623Z" level=info msg="TearDown network for sandbox \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\" successfully" Apr 17 23:40:14.901190 containerd[1456]: time="2026-04-17T23:40:14.900993883Z" level=info msg="StopPodSandbox for \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\" returns successfully" Apr 17 23:40:14.904244 containerd[1456]: time="2026-04-17T23:40:14.904216941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d4d9564-87p66,Uid:fe4b5225-389a-4c5f-90d9-d343b520891b,Namespace:calico-system,Attempt:1,}" Apr 17 23:40:14.942477 containerd[1456]: 2026-04-17 23:40:14.863 [INFO][4607] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Apr 17 23:40:14.942477 containerd[1456]: 2026-04-17 23:40:14.863 [INFO][4607] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" iface="eth0" netns="/var/run/netns/cni-6a260e1d-fa54-6e77-a644-6c1d3f612190" Apr 17 23:40:14.942477 containerd[1456]: 2026-04-17 23:40:14.865 [INFO][4607] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" iface="eth0" netns="/var/run/netns/cni-6a260e1d-fa54-6e77-a644-6c1d3f612190" Apr 17 23:40:14.942477 containerd[1456]: 2026-04-17 23:40:14.865 [INFO][4607] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" iface="eth0" netns="/var/run/netns/cni-6a260e1d-fa54-6e77-a644-6c1d3f612190" Apr 17 23:40:14.942477 containerd[1456]: 2026-04-17 23:40:14.865 [INFO][4607] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Apr 17 23:40:14.942477 containerd[1456]: 2026-04-17 23:40:14.865 [INFO][4607] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Apr 17 23:40:14.942477 containerd[1456]: 2026-04-17 23:40:14.913 [INFO][4634] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" HandleID="k8s-pod-network.6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Workload="172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0" Apr 17 23:40:14.942477 containerd[1456]: 2026-04-17 23:40:14.914 [INFO][4634] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:14.942477 containerd[1456]: 2026-04-17 23:40:14.914 [INFO][4634] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:14.942477 containerd[1456]: 2026-04-17 23:40:14.930 [WARNING][4634] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" HandleID="k8s-pod-network.6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Workload="172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0" Apr 17 23:40:14.942477 containerd[1456]: 2026-04-17 23:40:14.931 [INFO][4634] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" HandleID="k8s-pod-network.6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Workload="172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0" Apr 17 23:40:14.942477 containerd[1456]: 2026-04-17 23:40:14.932 [INFO][4634] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:14.942477 containerd[1456]: 2026-04-17 23:40:14.937 [INFO][4607] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Apr 17 23:40:14.943172 containerd[1456]: time="2026-04-17T23:40:14.942832192Z" level=info msg="TearDown network for sandbox \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\" successfully" Apr 17 23:40:14.949721 containerd[1456]: time="2026-04-17T23:40:14.942928002Z" level=info msg="StopPodSandbox for \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\" returns successfully" Apr 17 23:40:14.955727 kubelet[2540]: E0417 23:40:14.954317 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:40:14.956218 containerd[1456]: time="2026-04-17T23:40:14.955600986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lh2x8,Uid:98a1a605-7c5c-4f5c-801d-322ef1144e09,Namespace:kube-system,Attempt:1,}" Apr 17 23:40:14.960215 systemd[1]: run-netns-cni\x2d6c380b4d\x2dd1f4\x2dc22d\x2d8bad\x2d497afc4a163f.mount: Deactivated successfully. 
Apr 17 23:40:14.960316 systemd[1]: run-netns-cni\x2d6a260e1d\x2dfa54\x2d6e77\x2da644\x2d6c1d3f612190.mount: Deactivated successfully. Apr 17 23:40:15.101824 systemd-networkd[1376]: calibeb2ced4cd1: Link UP Apr 17 23:40:15.102901 systemd-networkd[1376]: calibeb2ced4cd1: Gained carrier Apr 17 23:40:15.121770 containerd[1456]: 2026-04-17 23:40:14.966 [ERROR][4642] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:40:15.121770 containerd[1456]: 2026-04-17 23:40:14.989 [INFO][4642] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0 calico-apiserver-747d4d9564- calico-system fe4b5225-389a-4c5f-90d9-d343b520891b 949 0 2026-04-17 23:39:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:747d4d9564 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-238-189-76 calico-apiserver-747d4d9564-87p66 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calibeb2ced4cd1 [] [] }} ContainerID="a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2" Namespace="calico-system" Pod="calico-apiserver-747d4d9564-87p66" WorkloadEndpoint="172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-" Apr 17 23:40:15.121770 containerd[1456]: 2026-04-17 23:40:14.989 [INFO][4642] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2" Namespace="calico-system" Pod="calico-apiserver-747d4d9564-87p66" WorkloadEndpoint="172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0" Apr 17 23:40:15.121770 containerd[1456]: 2026-04-17 23:40:15.035 [INFO][4663] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2" HandleID="k8s-pod-network.a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0" Apr 17 23:40:15.121770 containerd[1456]: 2026-04-17 23:40:15.051 [INFO][4663] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2" HandleID="k8s-pod-network.a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdee0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-189-76", "pod":"calico-apiserver-747d4d9564-87p66", "timestamp":"2026-04-17 23:40:15.035338326 +0000 UTC"}, Hostname:"172-238-189-76", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001886e0)} Apr 17 23:40:15.121770 containerd[1456]: 2026-04-17 23:40:15.051 [INFO][4663] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:15.121770 containerd[1456]: 2026-04-17 23:40:15.051 [INFO][4663] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:40:15.121770 containerd[1456]: 2026-04-17 23:40:15.051 [INFO][4663] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-189-76' Apr 17 23:40:15.121770 containerd[1456]: 2026-04-17 23:40:15.055 [INFO][4663] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2" host="172-238-189-76" Apr 17 23:40:15.121770 containerd[1456]: 2026-04-17 23:40:15.062 [INFO][4663] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-189-76" Apr 17 23:40:15.121770 containerd[1456]: 2026-04-17 23:40:15.068 [INFO][4663] ipam/ipam.go 526: Trying affinity for 192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:15.121770 containerd[1456]: 2026-04-17 23:40:15.070 [INFO][4663] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:15.121770 containerd[1456]: 2026-04-17 23:40:15.073 [INFO][4663] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:15.121770 containerd[1456]: 2026-04-17 23:40:15.073 [INFO][4663] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.64/26 handle="k8s-pod-network.a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2" host="172-238-189-76" Apr 17 23:40:15.121770 containerd[1456]: 2026-04-17 23:40:15.075 [INFO][4663] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2 Apr 17 23:40:15.121770 containerd[1456]: 2026-04-17 23:40:15.080 [INFO][4663] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.64/26 handle="k8s-pod-network.a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2" host="172-238-189-76" Apr 17 23:40:15.121770 containerd[1456]: 2026-04-17 23:40:15.086 [INFO][4663] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.69/26] block=192.168.75.64/26 
handle="k8s-pod-network.a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2" host="172-238-189-76" Apr 17 23:40:15.121770 containerd[1456]: 2026-04-17 23:40:15.086 [INFO][4663] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.69/26] handle="k8s-pod-network.a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2" host="172-238-189-76" Apr 17 23:40:15.121770 containerd[1456]: 2026-04-17 23:40:15.086 [INFO][4663] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:15.121770 containerd[1456]: 2026-04-17 23:40:15.086 [INFO][4663] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.69/26] IPv6=[] ContainerID="a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2" HandleID="k8s-pod-network.a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0" Apr 17 23:40:15.122615 containerd[1456]: 2026-04-17 23:40:15.092 [INFO][4642] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2" Namespace="calico-system" Pod="calico-apiserver-747d4d9564-87p66" WorkloadEndpoint="172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0", GenerateName:"calico-apiserver-747d4d9564-", Namespace:"calico-system", SelfLink:"", UID:"fe4b5225-389a-4c5f-90d9-d343b520891b", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d4d9564", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"", Pod:"calico-apiserver-747d4d9564-87p66", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibeb2ced4cd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:15.122615 containerd[1456]: 2026-04-17 23:40:15.092 [INFO][4642] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.69/32] ContainerID="a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2" Namespace="calico-system" Pod="calico-apiserver-747d4d9564-87p66" WorkloadEndpoint="172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0" Apr 17 23:40:15.122615 containerd[1456]: 2026-04-17 23:40:15.093 [INFO][4642] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibeb2ced4cd1 ContainerID="a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2" Namespace="calico-system" Pod="calico-apiserver-747d4d9564-87p66" WorkloadEndpoint="172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0" Apr 17 23:40:15.122615 containerd[1456]: 2026-04-17 23:40:15.097 [INFO][4642] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2" Namespace="calico-system" Pod="calico-apiserver-747d4d9564-87p66" WorkloadEndpoint="172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0" Apr 17 23:40:15.122615 containerd[1456]: 2026-04-17 23:40:15.098 [INFO][4642] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2" Namespace="calico-system" Pod="calico-apiserver-747d4d9564-87p66" WorkloadEndpoint="172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0", GenerateName:"calico-apiserver-747d4d9564-", Namespace:"calico-system", SelfLink:"", UID:"fe4b5225-389a-4c5f-90d9-d343b520891b", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d4d9564", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2", Pod:"calico-apiserver-747d4d9564-87p66", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibeb2ced4cd1", MAC:"6a:5e:92:60:85:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:15.122615 containerd[1456]: 2026-04-17 23:40:15.111 [INFO][4642] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2" Namespace="calico-system" Pod="calico-apiserver-747d4d9564-87p66" WorkloadEndpoint="172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0" Apr 17 23:40:15.155576 containerd[1456]: time="2026-04-17T23:40:15.155237300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:40:15.155576 containerd[1456]: time="2026-04-17T23:40:15.155319820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:40:15.155576 containerd[1456]: time="2026-04-17T23:40:15.155335190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:15.155576 containerd[1456]: time="2026-04-17T23:40:15.155423710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:15.197021 systemd[1]: Started cri-containerd-a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2.scope - libcontainer container a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2. 
Apr 17 23:40:15.204609 systemd-networkd[1376]: cali171c3f9e0ca: Link UP Apr 17 23:40:15.205476 systemd-networkd[1376]: cali171c3f9e0ca: Gained carrier Apr 17 23:40:15.228070 containerd[1456]: 2026-04-17 23:40:15.039 [ERROR][4652] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:40:15.228070 containerd[1456]: 2026-04-17 23:40:15.055 [INFO][4652] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0 coredns-66bc5c9577- kube-system 98a1a605-7c5c-4f5c-801d-322ef1144e09 950 0 2026-04-17 23:39:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-238-189-76 coredns-66bc5c9577-lh2x8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali171c3f9e0ca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558" Namespace="kube-system" Pod="coredns-66bc5c9577-lh2x8" WorkloadEndpoint="172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-" Apr 17 23:40:15.228070 containerd[1456]: 2026-04-17 23:40:15.055 [INFO][4652] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558" Namespace="kube-system" Pod="coredns-66bc5c9577-lh2x8" WorkloadEndpoint="172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0" Apr 17 23:40:15.228070 containerd[1456]: 2026-04-17 23:40:15.108 [INFO][4672] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558" 
HandleID="k8s-pod-network.ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558" Workload="172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0" Apr 17 23:40:15.228070 containerd[1456]: 2026-04-17 23:40:15.121 [INFO][4672] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558" HandleID="k8s-pod-network.ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558" Workload="172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e77b0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-238-189-76", "pod":"coredns-66bc5c9577-lh2x8", "timestamp":"2026-04-17 23:40:15.108121082 +0000 UTC"}, Hostname:"172-238-189-76", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00022d8c0)} Apr 17 23:40:15.228070 containerd[1456]: 2026-04-17 23:40:15.121 [INFO][4672] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:15.228070 containerd[1456]: 2026-04-17 23:40:15.121 [INFO][4672] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:40:15.228070 containerd[1456]: 2026-04-17 23:40:15.121 [INFO][4672] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-189-76' Apr 17 23:40:15.228070 containerd[1456]: 2026-04-17 23:40:15.154 [INFO][4672] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558" host="172-238-189-76" Apr 17 23:40:15.228070 containerd[1456]: 2026-04-17 23:40:15.163 [INFO][4672] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-189-76" Apr 17 23:40:15.228070 containerd[1456]: 2026-04-17 23:40:15.169 [INFO][4672] ipam/ipam.go 526: Trying affinity for 192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:15.228070 containerd[1456]: 2026-04-17 23:40:15.171 [INFO][4672] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:15.228070 containerd[1456]: 2026-04-17 23:40:15.175 [INFO][4672] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:15.228070 containerd[1456]: 2026-04-17 23:40:15.175 [INFO][4672] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.64/26 handle="k8s-pod-network.ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558" host="172-238-189-76" Apr 17 23:40:15.228070 containerd[1456]: 2026-04-17 23:40:15.177 [INFO][4672] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558 Apr 17 23:40:15.228070 containerd[1456]: 2026-04-17 23:40:15.181 [INFO][4672] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.64/26 handle="k8s-pod-network.ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558" host="172-238-189-76" Apr 17 23:40:15.228070 containerd[1456]: 2026-04-17 23:40:15.188 [INFO][4672] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.70/26] block=192.168.75.64/26 
handle="k8s-pod-network.ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558" host="172-238-189-76" Apr 17 23:40:15.228070 containerd[1456]: 2026-04-17 23:40:15.189 [INFO][4672] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.70/26] handle="k8s-pod-network.ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558" host="172-238-189-76" Apr 17 23:40:15.228070 containerd[1456]: 2026-04-17 23:40:15.189 [INFO][4672] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:15.228070 containerd[1456]: 2026-04-17 23:40:15.189 [INFO][4672] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.70/26] IPv6=[] ContainerID="ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558" HandleID="k8s-pod-network.ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558" Workload="172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0" Apr 17 23:40:15.230549 containerd[1456]: 2026-04-17 23:40:15.198 [INFO][4652] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558" Namespace="kube-system" Pod="coredns-66bc5c9577-lh2x8" WorkloadEndpoint="172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"98a1a605-7c5c-4f5c-801d-322ef1144e09", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"", Pod:"coredns-66bc5c9577-lh2x8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali171c3f9e0ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:15.230549 containerd[1456]: 2026-04-17 23:40:15.198 [INFO][4652] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.70/32] ContainerID="ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558" Namespace="kube-system" Pod="coredns-66bc5c9577-lh2x8" WorkloadEndpoint="172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0" Apr 17 23:40:15.230549 containerd[1456]: 2026-04-17 23:40:15.198 [INFO][4652] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali171c3f9e0ca ContainerID="ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558" Namespace="kube-system" Pod="coredns-66bc5c9577-lh2x8" 
WorkloadEndpoint="172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0" Apr 17 23:40:15.230549 containerd[1456]: 2026-04-17 23:40:15.206 [INFO][4652] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558" Namespace="kube-system" Pod="coredns-66bc5c9577-lh2x8" WorkloadEndpoint="172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0" Apr 17 23:40:15.230549 containerd[1456]: 2026-04-17 23:40:15.207 [INFO][4652] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558" Namespace="kube-system" Pod="coredns-66bc5c9577-lh2x8" WorkloadEndpoint="172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"98a1a605-7c5c-4f5c-801d-322ef1144e09", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558", Pod:"coredns-66bc5c9577-lh2x8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali171c3f9e0ca", MAC:"de:df:39:56:c2:40", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:15.230549 containerd[1456]: 2026-04-17 23:40:15.222 [INFO][4652] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558" Namespace="kube-system" Pod="coredns-66bc5c9577-lh2x8" WorkloadEndpoint="172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0" Apr 17 23:40:15.270166 containerd[1456]: time="2026-04-17T23:40:15.270069805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:40:15.270363 containerd[1456]: time="2026-04-17T23:40:15.270291205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:40:15.270363 containerd[1456]: time="2026-04-17T23:40:15.270328675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:15.273545 containerd[1456]: time="2026-04-17T23:40:15.273479604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:15.301821 systemd[1]: Started cri-containerd-ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558.scope - libcontainer container ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558. Apr 17 23:40:15.311841 containerd[1456]: time="2026-04-17T23:40:15.311668716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d4d9564-87p66,Uid:fe4b5225-389a-4c5f-90d9-d343b520891b,Namespace:calico-system,Attempt:1,} returns sandbox id \"a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2\"" Apr 17 23:40:15.361749 containerd[1456]: time="2026-04-17T23:40:15.361484482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lh2x8,Uid:98a1a605-7c5c-4f5c-801d-322ef1144e09,Namespace:kube-system,Attempt:1,} returns sandbox id \"ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558\"" Apr 17 23:40:15.364892 kubelet[2540]: E0417 23:40:15.363460 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:40:15.368649 containerd[1456]: time="2026-04-17T23:40:15.368481699Z" level=info msg="CreateContainer within sandbox \"ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:40:15.394091 containerd[1456]: time="2026-04-17T23:40:15.394051097Z" level=info msg="CreateContainer within sandbox \"ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"388d6949729f092762875ffc3ff4306750a13294b69345ca2eb8cae9834cca40\"" Apr 17 
23:40:15.395711 containerd[1456]: time="2026-04-17T23:40:15.395643846Z" level=info msg="StartContainer for \"388d6949729f092762875ffc3ff4306750a13294b69345ca2eb8cae9834cca40\"" Apr 17 23:40:15.434023 systemd[1]: Started cri-containerd-388d6949729f092762875ffc3ff4306750a13294b69345ca2eb8cae9834cca40.scope - libcontainer container 388d6949729f092762875ffc3ff4306750a13294b69345ca2eb8cae9834cca40. Apr 17 23:40:15.471759 containerd[1456]: time="2026-04-17T23:40:15.470431821Z" level=info msg="StartContainer for \"388d6949729f092762875ffc3ff4306750a13294b69345ca2eb8cae9834cca40\" returns successfully" Apr 17 23:40:15.628938 systemd-networkd[1376]: cali27081aead17: Gained IPv6LL Apr 17 23:40:15.747738 containerd[1456]: time="2026-04-17T23:40:15.747605580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:15.747738 containerd[1456]: time="2026-04-17T23:40:15.747659770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 17 23:40:15.750385 containerd[1456]: time="2026-04-17T23:40:15.749730759Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:15.751346 containerd[1456]: time="2026-04-17T23:40:15.751301118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:15.752155 containerd[1456]: time="2026-04-17T23:40:15.751913188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.66343065s" Apr 17 23:40:15.752155 containerd[1456]: time="2026-04-17T23:40:15.751950508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 17 23:40:15.754588 containerd[1456]: time="2026-04-17T23:40:15.754556086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 23:40:15.758187 containerd[1456]: time="2026-04-17T23:40:15.758162825Z" level=info msg="CreateContainer within sandbox \"efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 23:40:15.783903 containerd[1456]: time="2026-04-17T23:40:15.783829202Z" level=info msg="CreateContainer within sandbox \"efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b65cd615d5e87ffaf2f3f40bcebc6a17df8395ab0476c647f46bcfe51d173ff1\"" Apr 17 23:40:15.785653 containerd[1456]: time="2026-04-17T23:40:15.784903062Z" level=info msg="StartContainer for \"b65cd615d5e87ffaf2f3f40bcebc6a17df8395ab0476c647f46bcfe51d173ff1\"" Apr 17 23:40:15.826854 systemd[1]: Started cri-containerd-b65cd615d5e87ffaf2f3f40bcebc6a17df8395ab0476c647f46bcfe51d173ff1.scope - libcontainer container b65cd615d5e87ffaf2f3f40bcebc6a17df8395ab0476c647f46bcfe51d173ff1. 
Apr 17 23:40:15.853639 kubelet[2540]: I0417 23:40:15.853282 2540 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:40:15.855028 kubelet[2540]: E0417 23:40:15.854136 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:40:15.892854 containerd[1456]: time="2026-04-17T23:40:15.891565902Z" level=info msg="StartContainer for \"b65cd615d5e87ffaf2f3f40bcebc6a17df8395ab0476c647f46bcfe51d173ff1\" returns successfully" Apr 17 23:40:16.019240 kubelet[2540]: E0417 23:40:16.018400 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:40:16.023242 kubelet[2540]: E0417 23:40:16.022746 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:40:16.066276 kubelet[2540]: I0417 23:40:16.065151 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-hxd5z" podStartSLOduration=23.399694202 podStartE2EDuration="26.065137271s" podCreationTimestamp="2026-04-17 23:39:50 +0000 UTC" firstStartedPulling="2026-04-17 23:40:13.087483308 +0000 UTC m=+36.421380675" lastFinishedPulling="2026-04-17 23:40:15.752926377 +0000 UTC m=+39.086823744" observedRunningTime="2026-04-17 23:40:16.035654774 +0000 UTC m=+39.369552141" watchObservedRunningTime="2026-04-17 23:40:16.065137271 +0000 UTC m=+39.399034638" Apr 17 23:40:16.135054 kubelet[2540]: I0417 23:40:16.135003 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lh2x8" podStartSLOduration=34.13499016 podStartE2EDuration="34.13499016s" podCreationTimestamp="2026-04-17 
23:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:40:16.065510661 +0000 UTC m=+39.399408038" watchObservedRunningTime="2026-04-17 23:40:16.13499016 +0000 UTC m=+39.468887527" Apr 17 23:40:16.205895 systemd-networkd[1376]: calibc751cbdbd2: Gained IPv6LL Apr 17 23:40:16.720810 systemd-networkd[1376]: calibeb2ced4cd1: Gained IPv6LL Apr 17 23:40:16.769590 containerd[1456]: time="2026-04-17T23:40:16.769071959Z" level=info msg="StopPodSandbox for \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\"" Apr 17 23:40:16.771272 containerd[1456]: time="2026-04-17T23:40:16.771045659Z" level=info msg="StopPodSandbox for \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\"" Apr 17 23:40:16.957758 containerd[1456]: 2026-04-17 23:40:16.860 [INFO][4956] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Apr 17 23:40:16.957758 containerd[1456]: 2026-04-17 23:40:16.861 [INFO][4956] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" iface="eth0" netns="/var/run/netns/cni-913fd43b-3ed1-100f-c58d-c1084c40f3f6" Apr 17 23:40:16.957758 containerd[1456]: 2026-04-17 23:40:16.861 [INFO][4956] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" iface="eth0" netns="/var/run/netns/cni-913fd43b-3ed1-100f-c58d-c1084c40f3f6" Apr 17 23:40:16.957758 containerd[1456]: 2026-04-17 23:40:16.863 [INFO][4956] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" iface="eth0" netns="/var/run/netns/cni-913fd43b-3ed1-100f-c58d-c1084c40f3f6" Apr 17 23:40:16.957758 containerd[1456]: 2026-04-17 23:40:16.863 [INFO][4956] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Apr 17 23:40:16.957758 containerd[1456]: 2026-04-17 23:40:16.863 [INFO][4956] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Apr 17 23:40:16.957758 containerd[1456]: 2026-04-17 23:40:16.915 [INFO][4970] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" HandleID="k8s-pod-network.251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Workload="172--238--189--76-k8s-csi--node--driver--9m4mr-eth0" Apr 17 23:40:16.957758 containerd[1456]: 2026-04-17 23:40:16.915 [INFO][4970] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:16.957758 containerd[1456]: 2026-04-17 23:40:16.915 [INFO][4970] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:16.957758 containerd[1456]: 2026-04-17 23:40:16.932 [WARNING][4970] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" HandleID="k8s-pod-network.251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Workload="172--238--189--76-k8s-csi--node--driver--9m4mr-eth0" Apr 17 23:40:16.957758 containerd[1456]: 2026-04-17 23:40:16.932 [INFO][4970] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" HandleID="k8s-pod-network.251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Workload="172--238--189--76-k8s-csi--node--driver--9m4mr-eth0" Apr 17 23:40:16.957758 containerd[1456]: 2026-04-17 23:40:16.935 [INFO][4970] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:16.957758 containerd[1456]: 2026-04-17 23:40:16.946 [INFO][4956] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Apr 17 23:40:16.961118 systemd[1]: run-netns-cni\x2d913fd43b\x2d3ed1\x2d100f\x2dc58d\x2dc1084c40f3f6.mount: Deactivated successfully. 
Apr 17 23:40:16.963091 containerd[1456]: time="2026-04-17T23:40:16.963050943Z" level=info msg="TearDown network for sandbox \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\" successfully" Apr 17 23:40:16.963168 containerd[1456]: time="2026-04-17T23:40:16.963152183Z" level=info msg="StopPodSandbox for \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\" returns successfully" Apr 17 23:40:16.966736 containerd[1456]: time="2026-04-17T23:40:16.966664682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9m4mr,Uid:7519db54-398f-4489-8839-90013af059d5,Namespace:calico-system,Attempt:1,}" Apr 17 23:40:17.021823 kubelet[2540]: E0417 23:40:17.021719 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:40:17.059961 containerd[1456]: 2026-04-17 23:40:16.899 [INFO][4957] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Apr 17 23:40:17.059961 containerd[1456]: 2026-04-17 23:40:16.899 [INFO][4957] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" iface="eth0" netns="/var/run/netns/cni-ad88a573-983e-8f57-e607-ad551825ced3" Apr 17 23:40:17.059961 containerd[1456]: 2026-04-17 23:40:16.903 [INFO][4957] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" iface="eth0" netns="/var/run/netns/cni-ad88a573-983e-8f57-e607-ad551825ced3" Apr 17 23:40:17.059961 containerd[1456]: 2026-04-17 23:40:16.904 [INFO][4957] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" iface="eth0" netns="/var/run/netns/cni-ad88a573-983e-8f57-e607-ad551825ced3" Apr 17 23:40:17.059961 containerd[1456]: 2026-04-17 23:40:16.904 [INFO][4957] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Apr 17 23:40:17.059961 containerd[1456]: 2026-04-17 23:40:16.904 [INFO][4957] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Apr 17 23:40:17.059961 containerd[1456]: 2026-04-17 23:40:17.025 [INFO][4982] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" HandleID="k8s-pod-network.3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Workload="172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0" Apr 17 23:40:17.059961 containerd[1456]: 2026-04-17 23:40:17.025 [INFO][4982] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:17.059961 containerd[1456]: 2026-04-17 23:40:17.025 [INFO][4982] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:17.059961 containerd[1456]: 2026-04-17 23:40:17.032 [WARNING][4982] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" HandleID="k8s-pod-network.3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Workload="172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0" Apr 17 23:40:17.059961 containerd[1456]: 2026-04-17 23:40:17.032 [INFO][4982] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" HandleID="k8s-pod-network.3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Workload="172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0" Apr 17 23:40:17.059961 containerd[1456]: 2026-04-17 23:40:17.034 [INFO][4982] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:17.059961 containerd[1456]: 2026-04-17 23:40:17.047 [INFO][4957] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Apr 17 23:40:17.063064 containerd[1456]: time="2026-04-17T23:40:17.062780641Z" level=info msg="TearDown network for sandbox \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\" successfully" Apr 17 23:40:17.063064 containerd[1456]: time="2026-04-17T23:40:17.062805721Z" level=info msg="StopPodSandbox for \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\" returns successfully" Apr 17 23:40:17.065836 kubelet[2540]: E0417 23:40:17.065798 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:40:17.066453 systemd[1]: run-netns-cni\x2dad88a573\x2d983e\x2d8f57\x2de607\x2dad551825ced3.mount: Deactivated successfully. 
Apr 17 23:40:17.069200 containerd[1456]: time="2026-04-17T23:40:17.069111628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2kcx5,Uid:3b2079f5-4ea9-4796-8181-3d79d0da7db2,Namespace:kube-system,Attempt:1,}" Apr 17 23:40:17.167006 systemd-networkd[1376]: cali171c3f9e0ca: Gained IPv6LL Apr 17 23:40:17.190720 kernel: calico-node[4994]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 17 23:40:17.208649 systemd-networkd[1376]: califfc2c550906: Link UP Apr 17 23:40:17.210491 systemd-networkd[1376]: califfc2c550906: Gained carrier Apr 17 23:40:17.255062 containerd[1456]: 2026-04-17 23:40:17.106 [INFO][4992] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--189--76-k8s-csi--node--driver--9m4mr-eth0 csi-node-driver- calico-system 7519db54-398f-4489-8839-90013af059d5 994 0 2026-04-17 23:39:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-238-189-76 csi-node-driver-9m4mr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califfc2c550906 [] [] }} ContainerID="9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8" Namespace="calico-system" Pod="csi-node-driver-9m4mr" WorkloadEndpoint="172--238--189--76-k8s-csi--node--driver--9m4mr-" Apr 17 23:40:17.255062 containerd[1456]: 2026-04-17 23:40:17.106 [INFO][4992] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8" Namespace="calico-system" Pod="csi-node-driver-9m4mr" WorkloadEndpoint="172--238--189--76-k8s-csi--node--driver--9m4mr-eth0" Apr 17 23:40:17.255062 containerd[1456]: 2026-04-17 23:40:17.140 [INFO][5025] ipam/ipam_plugin.go 
235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8" HandleID="k8s-pod-network.9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8" Workload="172--238--189--76-k8s-csi--node--driver--9m4mr-eth0" Apr 17 23:40:17.255062 containerd[1456]: 2026-04-17 23:40:17.147 [INFO][5025] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8" HandleID="k8s-pod-network.9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8" Workload="172--238--189--76-k8s-csi--node--driver--9m4mr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd4c0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-189-76", "pod":"csi-node-driver-9m4mr", "timestamp":"2026-04-17 23:40:17.140446269 +0000 UTC"}, Hostname:"172-238-189-76", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003bef20)} Apr 17 23:40:17.255062 containerd[1456]: 2026-04-17 23:40:17.147 [INFO][5025] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:17.255062 containerd[1456]: 2026-04-17 23:40:17.147 [INFO][5025] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:40:17.255062 containerd[1456]: 2026-04-17 23:40:17.147 [INFO][5025] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-189-76' Apr 17 23:40:17.255062 containerd[1456]: 2026-04-17 23:40:17.151 [INFO][5025] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8" host="172-238-189-76" Apr 17 23:40:17.255062 containerd[1456]: 2026-04-17 23:40:17.156 [INFO][5025] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-189-76" Apr 17 23:40:17.255062 containerd[1456]: 2026-04-17 23:40:17.168 [INFO][5025] ipam/ipam.go 526: Trying affinity for 192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:17.255062 containerd[1456]: 2026-04-17 23:40:17.169 [INFO][5025] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:17.255062 containerd[1456]: 2026-04-17 23:40:17.172 [INFO][5025] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:17.255062 containerd[1456]: 2026-04-17 23:40:17.172 [INFO][5025] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.64/26 handle="k8s-pod-network.9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8" host="172-238-189-76" Apr 17 23:40:17.255062 containerd[1456]: 2026-04-17 23:40:17.179 [INFO][5025] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8 Apr 17 23:40:17.255062 containerd[1456]: 2026-04-17 23:40:17.187 [INFO][5025] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.64/26 handle="k8s-pod-network.9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8" host="172-238-189-76" Apr 17 23:40:17.255062 containerd[1456]: 2026-04-17 23:40:17.193 [INFO][5025] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.71/26] block=192.168.75.64/26 
handle="k8s-pod-network.9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8" host="172-238-189-76" Apr 17 23:40:17.255062 containerd[1456]: 2026-04-17 23:40:17.193 [INFO][5025] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.71/26] handle="k8s-pod-network.9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8" host="172-238-189-76" Apr 17 23:40:17.255062 containerd[1456]: 2026-04-17 23:40:17.193 [INFO][5025] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:17.255062 containerd[1456]: 2026-04-17 23:40:17.193 [INFO][5025] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.71/26] IPv6=[] ContainerID="9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8" HandleID="k8s-pod-network.9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8" Workload="172--238--189--76-k8s-csi--node--driver--9m4mr-eth0" Apr 17 23:40:17.255717 containerd[1456]: 2026-04-17 23:40:17.201 [INFO][4992] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8" Namespace="calico-system" Pod="csi-node-driver-9m4mr" WorkloadEndpoint="172--238--189--76-k8s-csi--node--driver--9m4mr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-csi--node--driver--9m4mr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7519db54-398f-4489-8839-90013af059d5", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"", Pod:"csi-node-driver-9m4mr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califfc2c550906", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:17.255717 containerd[1456]: 2026-04-17 23:40:17.202 [INFO][4992] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.71/32] ContainerID="9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8" Namespace="calico-system" Pod="csi-node-driver-9m4mr" WorkloadEndpoint="172--238--189--76-k8s-csi--node--driver--9m4mr-eth0" Apr 17 23:40:17.255717 containerd[1456]: 2026-04-17 23:40:17.202 [INFO][4992] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califfc2c550906 ContainerID="9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8" Namespace="calico-system" Pod="csi-node-driver-9m4mr" WorkloadEndpoint="172--238--189--76-k8s-csi--node--driver--9m4mr-eth0" Apr 17 23:40:17.255717 containerd[1456]: 2026-04-17 23:40:17.211 [INFO][4992] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8" Namespace="calico-system" Pod="csi-node-driver-9m4mr" WorkloadEndpoint="172--238--189--76-k8s-csi--node--driver--9m4mr-eth0" Apr 17 23:40:17.255717 containerd[1456]: 2026-04-17 23:40:17.212 [INFO][4992] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8" Namespace="calico-system" Pod="csi-node-driver-9m4mr" WorkloadEndpoint="172--238--189--76-k8s-csi--node--driver--9m4mr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-csi--node--driver--9m4mr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7519db54-398f-4489-8839-90013af059d5", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8", Pod:"csi-node-driver-9m4mr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califfc2c550906", MAC:"0a:02:4b:48:f9:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:17.255717 containerd[1456]: 2026-04-17 23:40:17.240 [INFO][4992] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8" 
Namespace="calico-system" Pod="csi-node-driver-9m4mr" WorkloadEndpoint="172--238--189--76-k8s-csi--node--driver--9m4mr-eth0" Apr 17 23:40:17.370468 containerd[1456]: time="2026-04-17T23:40:17.370366123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:40:17.370588 containerd[1456]: time="2026-04-17T23:40:17.370493153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:40:17.370588 containerd[1456]: time="2026-04-17T23:40:17.370525023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:17.372007 containerd[1456]: time="2026-04-17T23:40:17.371915712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:17.417768 systemd-networkd[1376]: calid1358daf144: Link UP Apr 17 23:40:17.420520 systemd-networkd[1376]: calid1358daf144: Gained carrier Apr 17 23:40:17.420854 systemd[1]: Started cri-containerd-9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8.scope - libcontainer container 9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8. 
Apr 17 23:40:17.454660 containerd[1456]: 2026-04-17 23:40:17.212 [INFO][5009] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0 coredns-66bc5c9577- kube-system 3b2079f5-4ea9-4796-8181-3d79d0da7db2 995 0 2026-04-17 23:39:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-238-189-76 coredns-66bc5c9577-2kcx5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid1358daf144 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2" Namespace="kube-system" Pod="coredns-66bc5c9577-2kcx5" WorkloadEndpoint="172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-" Apr 17 23:40:17.454660 containerd[1456]: 2026-04-17 23:40:17.212 [INFO][5009] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2" Namespace="kube-system" Pod="coredns-66bc5c9577-2kcx5" WorkloadEndpoint="172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0" Apr 17 23:40:17.454660 containerd[1456]: 2026-04-17 23:40:17.327 [INFO][5044] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2" HandleID="k8s-pod-network.3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2" Workload="172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0" Apr 17 23:40:17.454660 containerd[1456]: 2026-04-17 23:40:17.351 [INFO][5044] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2" HandleID="k8s-pod-network.3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2" 
Workload="172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004ac000), Attrs:map[string]string{"namespace":"kube-system", "node":"172-238-189-76", "pod":"coredns-66bc5c9577-2kcx5", "timestamp":"2026-04-17 23:40:17.327128031 +0000 UTC"}, Hostname:"172-238-189-76", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001942c0)} Apr 17 23:40:17.454660 containerd[1456]: 2026-04-17 23:40:17.354 [INFO][5044] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:17.454660 containerd[1456]: 2026-04-17 23:40:17.354 [INFO][5044] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:17.454660 containerd[1456]: 2026-04-17 23:40:17.354 [INFO][5044] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-189-76' Apr 17 23:40:17.454660 containerd[1456]: 2026-04-17 23:40:17.362 [INFO][5044] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2" host="172-238-189-76" Apr 17 23:40:17.454660 containerd[1456]: 2026-04-17 23:40:17.368 [INFO][5044] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-189-76" Apr 17 23:40:17.454660 containerd[1456]: 2026-04-17 23:40:17.380 [INFO][5044] ipam/ipam.go 526: Trying affinity for 192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:17.454660 containerd[1456]: 2026-04-17 23:40:17.382 [INFO][5044] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:17.454660 containerd[1456]: 2026-04-17 23:40:17.385 [INFO][5044] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.64/26 host="172-238-189-76" Apr 17 23:40:17.454660 containerd[1456]: 2026-04-17 23:40:17.386 
[INFO][5044] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.64/26 handle="k8s-pod-network.3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2" host="172-238-189-76" Apr 17 23:40:17.454660 containerd[1456]: 2026-04-17 23:40:17.388 [INFO][5044] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2 Apr 17 23:40:17.454660 containerd[1456]: 2026-04-17 23:40:17.392 [INFO][5044] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.64/26 handle="k8s-pod-network.3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2" host="172-238-189-76" Apr 17 23:40:17.454660 containerd[1456]: 2026-04-17 23:40:17.398 [INFO][5044] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.72/26] block=192.168.75.64/26 handle="k8s-pod-network.3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2" host="172-238-189-76" Apr 17 23:40:17.454660 containerd[1456]: 2026-04-17 23:40:17.398 [INFO][5044] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.72/26] handle="k8s-pod-network.3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2" host="172-238-189-76" Apr 17 23:40:17.454660 containerd[1456]: 2026-04-17 23:40:17.398 [INFO][5044] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:40:17.454660 containerd[1456]: 2026-04-17 23:40:17.398 [INFO][5044] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.72/26] IPv6=[] ContainerID="3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2" HandleID="k8s-pod-network.3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2" Workload="172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0" Apr 17 23:40:17.455344 containerd[1456]: 2026-04-17 23:40:17.408 [INFO][5009] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2" Namespace="kube-system" Pod="coredns-66bc5c9577-2kcx5" WorkloadEndpoint="172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3b2079f5-4ea9-4796-8181-3d79d0da7db2", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"", Pod:"coredns-66bc5c9577-2kcx5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid1358daf144", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:17.455344 containerd[1456]: 2026-04-17 23:40:17.408 [INFO][5009] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.72/32] ContainerID="3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2" Namespace="kube-system" Pod="coredns-66bc5c9577-2kcx5" WorkloadEndpoint="172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0" Apr 17 23:40:17.455344 containerd[1456]: 2026-04-17 23:40:17.408 [INFO][5009] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid1358daf144 ContainerID="3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2" Namespace="kube-system" Pod="coredns-66bc5c9577-2kcx5" WorkloadEndpoint="172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0" Apr 17 23:40:17.455344 containerd[1456]: 2026-04-17 23:40:17.420 [INFO][5009] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2" Namespace="kube-system" Pod="coredns-66bc5c9577-2kcx5" WorkloadEndpoint="172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0" Apr 17 23:40:17.455344 containerd[1456]: 2026-04-17 23:40:17.425 [INFO][5009] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2" Namespace="kube-system" Pod="coredns-66bc5c9577-2kcx5" WorkloadEndpoint="172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3b2079f5-4ea9-4796-8181-3d79d0da7db2", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2", Pod:"coredns-66bc5c9577-2kcx5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid1358daf144", MAC:"fe:9b:c0:3c:ed:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:17.455344 containerd[1456]: 2026-04-17 23:40:17.445 [INFO][5009] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2" Namespace="kube-system" Pod="coredns-66bc5c9577-2kcx5" WorkloadEndpoint="172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0" Apr 17 23:40:17.510493 containerd[1456]: time="2026-04-17T23:40:17.510461855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9m4mr,Uid:7519db54-398f-4489-8839-90013af059d5,Namespace:calico-system,Attempt:1,} returns sandbox id \"9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8\"" Apr 17 23:40:17.529975 containerd[1456]: time="2026-04-17T23:40:17.527733008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:40:17.529975 containerd[1456]: time="2026-04-17T23:40:17.527804088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:40:17.529975 containerd[1456]: time="2026-04-17T23:40:17.527817588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:17.529975 containerd[1456]: time="2026-04-17T23:40:17.527916938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:17.582829 systemd[1]: Started cri-containerd-3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2.scope - libcontainer container 3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2. Apr 17 23:40:17.651225 containerd[1456]: time="2026-04-17T23:40:17.649848027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2kcx5,Uid:3b2079f5-4ea9-4796-8181-3d79d0da7db2,Namespace:kube-system,Attempt:1,} returns sandbox id \"3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2\"" Apr 17 23:40:17.655538 kubelet[2540]: E0417 23:40:17.654913 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:40:17.665879 containerd[1456]: time="2026-04-17T23:40:17.664983401Z" level=info msg="CreateContainer within sandbox \"3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:40:17.691528 containerd[1456]: time="2026-04-17T23:40:17.691482130Z" level=info msg="CreateContainer within sandbox \"3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"74a40652ac752b48901fdf3b176238e82a28da54418b0f9aab98f11e2bb67ce6\"" Apr 17 23:40:17.693485 containerd[1456]: time="2026-04-17T23:40:17.693296059Z" level=info msg="StartContainer for \"74a40652ac752b48901fdf3b176238e82a28da54418b0f9aab98f11e2bb67ce6\"" Apr 17 23:40:17.776157 systemd[1]: Started cri-containerd-74a40652ac752b48901fdf3b176238e82a28da54418b0f9aab98f11e2bb67ce6.scope - libcontainer container 74a40652ac752b48901fdf3b176238e82a28da54418b0f9aab98f11e2bb67ce6. 
Apr 17 23:40:17.830420 containerd[1456]: time="2026-04-17T23:40:17.830378862Z" level=info msg="StartContainer for \"74a40652ac752b48901fdf3b176238e82a28da54418b0f9aab98f11e2bb67ce6\" returns successfully" Apr 17 23:40:18.027518 kubelet[2540]: E0417 23:40:18.026768 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:40:18.034290 kubelet[2540]: E0417 23:40:18.032107 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:40:18.049387 kubelet[2540]: I0417 23:40:18.049345 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2kcx5" podStartSLOduration=36.049332422 podStartE2EDuration="36.049332422s" podCreationTimestamp="2026-04-17 23:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:40:18.049054932 +0000 UTC m=+41.382952299" watchObservedRunningTime="2026-04-17 23:40:18.049332422 +0000 UTC m=+41.383229789" Apr 17 23:40:18.159541 systemd-networkd[1376]: vxlan.calico: Link UP Apr 17 23:40:18.159549 systemd-networkd[1376]: vxlan.calico: Gained carrier Apr 17 23:40:18.253657 systemd-networkd[1376]: califfc2c550906: Gained IPv6LL Apr 17 23:40:18.302980 containerd[1456]: time="2026-04-17T23:40:18.302921493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:18.304021 containerd[1456]: time="2026-04-17T23:40:18.303982423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 17 23:40:18.304641 containerd[1456]: 
time="2026-04-17T23:40:18.304611173Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:18.306514 containerd[1456]: time="2026-04-17T23:40:18.306482502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:18.307822 containerd[1456]: time="2026-04-17T23:40:18.307789201Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.553201655s" Apr 17 23:40:18.307851 containerd[1456]: time="2026-04-17T23:40:18.307820751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 17 23:40:18.309171 containerd[1456]: time="2026-04-17T23:40:18.309145881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:40:18.333485 containerd[1456]: time="2026-04-17T23:40:18.333399092Z" level=info msg="CreateContainer within sandbox \"2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 23:40:18.350718 containerd[1456]: time="2026-04-17T23:40:18.350667435Z" level=info msg="CreateContainer within sandbox \"2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"ddb86888043b0e5cab9d0573394ec307337b54dc55e4b656b3e6922f38bc451e\"" Apr 17 23:40:18.352491 containerd[1456]: time="2026-04-17T23:40:18.352460014Z" level=info msg="StartContainer for \"ddb86888043b0e5cab9d0573394ec307337b54dc55e4b656b3e6922f38bc451e\"" Apr 17 23:40:18.391850 systemd[1]: Started cri-containerd-ddb86888043b0e5cab9d0573394ec307337b54dc55e4b656b3e6922f38bc451e.scope - libcontainer container ddb86888043b0e5cab9d0573394ec307337b54dc55e4b656b3e6922f38bc451e. Apr 17 23:40:18.439788 containerd[1456]: time="2026-04-17T23:40:18.439227160Z" level=info msg="StartContainer for \"ddb86888043b0e5cab9d0573394ec307337b54dc55e4b656b3e6922f38bc451e\" returns successfully" Apr 17 23:40:18.828866 systemd-networkd[1376]: calid1358daf144: Gained IPv6LL Apr 17 23:40:19.043569 kubelet[2540]: E0417 23:40:19.043516 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:40:19.137021 kubelet[2540]: I0417 23:40:19.136425 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d5d6df97d-qjfqb" podStartSLOduration=24.263715169 podStartE2EDuration="28.136382312s" podCreationTimestamp="2026-04-17 23:39:51 +0000 UTC" firstStartedPulling="2026-04-17 23:40:14.435849398 +0000 UTC m=+37.769746765" lastFinishedPulling="2026-04-17 23:40:18.308516541 +0000 UTC m=+41.642413908" observedRunningTime="2026-04-17 23:40:19.062533089 +0000 UTC m=+42.396430486" watchObservedRunningTime="2026-04-17 23:40:19.136382312 +0000 UTC m=+42.470279679" Apr 17 23:40:19.845175 containerd[1456]: time="2026-04-17T23:40:19.844937503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:19.846647 containerd[1456]: time="2026-04-17T23:40:19.846244423Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 17 23:40:19.846647 containerd[1456]: time="2026-04-17T23:40:19.846610033Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:19.849621 containerd[1456]: time="2026-04-17T23:40:19.848586922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:19.849621 containerd[1456]: time="2026-04-17T23:40:19.849514451Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.540341661s" Apr 17 23:40:19.849621 containerd[1456]: time="2026-04-17T23:40:19.849539071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:40:19.851595 containerd[1456]: time="2026-04-17T23:40:19.851578131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:40:19.854301 containerd[1456]: time="2026-04-17T23:40:19.854273190Z" level=info msg="CreateContainer within sandbox \"20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:40:19.870686 containerd[1456]: time="2026-04-17T23:40:19.870647674Z" level=info msg="CreateContainer within sandbox \"20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"ebbf36b666091f59a75ca666fd06aa2c76ac7549fbcf83d8f07cf970e71fd1f2\"" Apr 17 23:40:19.871136 containerd[1456]: time="2026-04-17T23:40:19.871106224Z" level=info msg="StartContainer for \"ebbf36b666091f59a75ca666fd06aa2c76ac7549fbcf83d8f07cf970e71fd1f2\"" Apr 17 23:40:19.920823 systemd[1]: Started cri-containerd-ebbf36b666091f59a75ca666fd06aa2c76ac7549fbcf83d8f07cf970e71fd1f2.scope - libcontainer container ebbf36b666091f59a75ca666fd06aa2c76ac7549fbcf83d8f07cf970e71fd1f2. Apr 17 23:40:19.972887 containerd[1456]: time="2026-04-17T23:40:19.972722736Z" level=info msg="StartContainer for \"ebbf36b666091f59a75ca666fd06aa2c76ac7549fbcf83d8f07cf970e71fd1f2\" returns successfully" Apr 17 23:40:20.015432 containerd[1456]: time="2026-04-17T23:40:20.015388301Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:20.017232 containerd[1456]: time="2026-04-17T23:40:20.017189571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 23:40:20.019075 containerd[1456]: time="2026-04-17T23:40:20.019045040Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 167.279619ms" Apr 17 23:40:20.019109 containerd[1456]: time="2026-04-17T23:40:20.019092750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:40:20.020941 containerd[1456]: time="2026-04-17T23:40:20.020907399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 23:40:20.025444 
containerd[1456]: time="2026-04-17T23:40:20.025403658Z" level=info msg="CreateContainer within sandbox \"a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:40:20.036885 containerd[1456]: time="2026-04-17T23:40:20.036813154Z" level=info msg="CreateContainer within sandbox \"a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7e7cabe2b78dca22ac77aff16f7c69fb652887772617a24924a4e858d3db162c\"" Apr 17 23:40:20.039377 containerd[1456]: time="2026-04-17T23:40:20.039349533Z" level=info msg="StartContainer for \"7e7cabe2b78dca22ac77aff16f7c69fb652887772617a24924a4e858d3db162c\"" Apr 17 23:40:20.068145 kubelet[2540]: E0417 23:40:20.067921 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:40:20.111791 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Apr 17 23:40:20.124418 systemd[1]: Started cri-containerd-7e7cabe2b78dca22ac77aff16f7c69fb652887772617a24924a4e858d3db162c.scope - libcontainer container 7e7cabe2b78dca22ac77aff16f7c69fb652887772617a24924a4e858d3db162c. 
Apr 17 23:40:20.190614 containerd[1456]: time="2026-04-17T23:40:20.190574201Z" level=info msg="StartContainer for \"7e7cabe2b78dca22ac77aff16f7c69fb652887772617a24924a4e858d3db162c\" returns successfully" Apr 17 23:40:20.796147 containerd[1456]: time="2026-04-17T23:40:20.795409594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:20.796737 containerd[1456]: time="2026-04-17T23:40:20.796682854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 17 23:40:20.797370 containerd[1456]: time="2026-04-17T23:40:20.797348764Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:20.800013 containerd[1456]: time="2026-04-17T23:40:20.799993603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:20.801361 containerd[1456]: time="2026-04-17T23:40:20.801341042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 780.405313ms" Apr 17 23:40:20.801453 containerd[1456]: time="2026-04-17T23:40:20.801435222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 17 23:40:20.806807 containerd[1456]: time="2026-04-17T23:40:20.806785620Z" level=info msg="CreateContainer within sandbox 
\"9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 23:40:20.827745 containerd[1456]: time="2026-04-17T23:40:20.827119093Z" level=info msg="CreateContainer within sandbox \"9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8e8b4e9d33008dc9cd2e35a29ca3d00e95f71646c0c7345f2b8076000a0fb050\"" Apr 17 23:40:20.829372 containerd[1456]: time="2026-04-17T23:40:20.829351553Z" level=info msg="StartContainer for \"8e8b4e9d33008dc9cd2e35a29ca3d00e95f71646c0c7345f2b8076000a0fb050\"" Apr 17 23:40:20.872237 systemd[1]: Started cri-containerd-8e8b4e9d33008dc9cd2e35a29ca3d00e95f71646c0c7345f2b8076000a0fb050.scope - libcontainer container 8e8b4e9d33008dc9cd2e35a29ca3d00e95f71646c0c7345f2b8076000a0fb050. Apr 17 23:40:20.943081 containerd[1456]: time="2026-04-17T23:40:20.942836104Z" level=info msg="StartContainer for \"8e8b4e9d33008dc9cd2e35a29ca3d00e95f71646c0c7345f2b8076000a0fb050\" returns successfully" Apr 17 23:40:20.946380 containerd[1456]: time="2026-04-17T23:40:20.946186103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 23:40:21.082624 kubelet[2540]: I0417 23:40:21.082512 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-747d4d9564-p57qj" podStartSLOduration=25.761272657 podStartE2EDuration="31.082499978s" podCreationTimestamp="2026-04-17 23:39:50 +0000 UTC" firstStartedPulling="2026-04-17 23:40:14.52960917 +0000 UTC m=+37.863506537" lastFinishedPulling="2026-04-17 23:40:19.850836481 +0000 UTC m=+43.184733858" observedRunningTime="2026-04-17 23:40:20.071499512 +0000 UTC m=+43.405396879" watchObservedRunningTime="2026-04-17 23:40:21.082499978 +0000 UTC m=+44.416397355" Apr 17 23:40:21.394194 kubelet[2540]: I0417 23:40:21.393448 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-apiserver-747d4d9564-87p66" podStartSLOduration=26.688622232 podStartE2EDuration="31.393433758s" podCreationTimestamp="2026-04-17 23:39:50 +0000 UTC" firstStartedPulling="2026-04-17 23:40:15.315231814 +0000 UTC m=+38.649129181" lastFinishedPulling="2026-04-17 23:40:20.02004333 +0000 UTC m=+43.353940707" observedRunningTime="2026-04-17 23:40:21.083744937 +0000 UTC m=+44.417642304" watchObservedRunningTime="2026-04-17 23:40:21.393433758 +0000 UTC m=+44.727331125" Apr 17 23:40:21.859324 containerd[1456]: time="2026-04-17T23:40:21.858485069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:21.859324 containerd[1456]: time="2026-04-17T23:40:21.859290108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 17 23:40:21.859502 containerd[1456]: time="2026-04-17T23:40:21.859453838Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:21.863348 containerd[1456]: time="2026-04-17T23:40:21.861478968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:21.863348 containerd[1456]: time="2026-04-17T23:40:21.862247857Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 916.034544ms" Apr 17 
23:40:21.863348 containerd[1456]: time="2026-04-17T23:40:21.862272717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 17 23:40:21.870747 containerd[1456]: time="2026-04-17T23:40:21.870565065Z" level=info msg="CreateContainer within sandbox \"9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 23:40:21.894409 containerd[1456]: time="2026-04-17T23:40:21.894384787Z" level=info msg="CreateContainer within sandbox \"9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"30de2a16ce7852910178bbd69d11016c0b8da3acc622f8a77719b3f0be990c99\"" Apr 17 23:40:21.896071 containerd[1456]: time="2026-04-17T23:40:21.896028047Z" level=info msg="StartContainer for \"30de2a16ce7852910178bbd69d11016c0b8da3acc622f8a77719b3f0be990c99\"" Apr 17 23:40:21.944859 systemd[1]: Started cri-containerd-30de2a16ce7852910178bbd69d11016c0b8da3acc622f8a77719b3f0be990c99.scope - libcontainer container 30de2a16ce7852910178bbd69d11016c0b8da3acc622f8a77719b3f0be990c99. 
Apr 17 23:40:21.979307 containerd[1456]: time="2026-04-17T23:40:21.979226340Z" level=info msg="StartContainer for \"30de2a16ce7852910178bbd69d11016c0b8da3acc622f8a77719b3f0be990c99\" returns successfully" Apr 17 23:40:22.077330 kubelet[2540]: I0417 23:40:22.077307 2540 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:40:22.088549 kubelet[2540]: I0417 23:40:22.088168 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9m4mr" podStartSLOduration=26.735865794 podStartE2EDuration="31.088154497s" podCreationTimestamp="2026-04-17 23:39:51 +0000 UTC" firstStartedPulling="2026-04-17 23:40:17.511894434 +0000 UTC m=+40.845791801" lastFinishedPulling="2026-04-17 23:40:21.864183127 +0000 UTC m=+45.198080504" observedRunningTime="2026-04-17 23:40:22.087890327 +0000 UTC m=+45.421787704" watchObservedRunningTime="2026-04-17 23:40:22.088154497 +0000 UTC m=+45.422051864" Apr 17 23:40:22.838060 kubelet[2540]: I0417 23:40:22.838032 2540 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 23:40:22.839215 kubelet[2540]: I0417 23:40:22.839200 2540 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 23:40:28.477292 kubelet[2540]: I0417 23:40:28.476916 2540 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:40:36.741254 containerd[1456]: time="2026-04-17T23:40:36.741152726Z" level=info msg="StopPodSandbox for \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\"" Apr 17 23:40:36.826814 containerd[1456]: 2026-04-17 23:40:36.784 [WARNING][5579] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0", GenerateName:"calico-apiserver-747d4d9564-", Namespace:"calico-system", SelfLink:"", UID:"a9d870f4-95e3-4941-9a77-e1b80afca9bd", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d4d9564", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082", Pod:"calico-apiserver-747d4d9564-p57qj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibc751cbdbd2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:36.826814 containerd[1456]: 2026-04-17 23:40:36.785 [INFO][5579] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Apr 17 23:40:36.826814 containerd[1456]: 2026-04-17 23:40:36.785 [INFO][5579] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" iface="eth0" netns="" Apr 17 23:40:36.826814 containerd[1456]: 2026-04-17 23:40:36.785 [INFO][5579] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Apr 17 23:40:36.826814 containerd[1456]: 2026-04-17 23:40:36.785 [INFO][5579] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Apr 17 23:40:36.826814 containerd[1456]: 2026-04-17 23:40:36.809 [INFO][5589] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" HandleID="k8s-pod-network.918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0" Apr 17 23:40:36.826814 containerd[1456]: 2026-04-17 23:40:36.809 [INFO][5589] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:36.826814 containerd[1456]: 2026-04-17 23:40:36.809 [INFO][5589] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:36.826814 containerd[1456]: 2026-04-17 23:40:36.815 [WARNING][5589] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" HandleID="k8s-pod-network.918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0" Apr 17 23:40:36.826814 containerd[1456]: 2026-04-17 23:40:36.816 [INFO][5589] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" HandleID="k8s-pod-network.918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0" Apr 17 23:40:36.826814 containerd[1456]: 2026-04-17 23:40:36.817 [INFO][5589] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:36.826814 containerd[1456]: 2026-04-17 23:40:36.821 [INFO][5579] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Apr 17 23:40:36.826814 containerd[1456]: time="2026-04-17T23:40:36.826678786Z" level=info msg="TearDown network for sandbox \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\" successfully" Apr 17 23:40:36.826814 containerd[1456]: time="2026-04-17T23:40:36.826735076Z" level=info msg="StopPodSandbox for \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\" returns successfully" Apr 17 23:40:36.827595 containerd[1456]: time="2026-04-17T23:40:36.827566736Z" level=info msg="RemovePodSandbox for \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\"" Apr 17 23:40:36.827623 containerd[1456]: time="2026-04-17T23:40:36.827600336Z" level=info msg="Forcibly stopping sandbox \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\"" Apr 17 23:40:36.905424 containerd[1456]: 2026-04-17 23:40:36.865 [WARNING][5604] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0", GenerateName:"calico-apiserver-747d4d9564-", Namespace:"calico-system", SelfLink:"", UID:"a9d870f4-95e3-4941-9a77-e1b80afca9bd", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d4d9564", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"20244cad104fedd5a1b6c253e3a236e073902ed78b3011430e560dbf25fa7082", Pod:"calico-apiserver-747d4d9564-p57qj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibc751cbdbd2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:36.905424 containerd[1456]: 2026-04-17 23:40:36.865 [INFO][5604] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Apr 17 23:40:36.905424 containerd[1456]: 2026-04-17 23:40:36.865 [INFO][5604] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" iface="eth0" netns="" Apr 17 23:40:36.905424 containerd[1456]: 2026-04-17 23:40:36.866 [INFO][5604] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Apr 17 23:40:36.905424 containerd[1456]: 2026-04-17 23:40:36.866 [INFO][5604] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Apr 17 23:40:36.905424 containerd[1456]: 2026-04-17 23:40:36.888 [INFO][5611] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" HandleID="k8s-pod-network.918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0" Apr 17 23:40:36.905424 containerd[1456]: 2026-04-17 23:40:36.888 [INFO][5611] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:36.905424 containerd[1456]: 2026-04-17 23:40:36.888 [INFO][5611] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:36.905424 containerd[1456]: 2026-04-17 23:40:36.894 [WARNING][5611] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" HandleID="k8s-pod-network.918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0" Apr 17 23:40:36.905424 containerd[1456]: 2026-04-17 23:40:36.894 [INFO][5611] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" HandleID="k8s-pod-network.918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--p57qj-eth0" Apr 17 23:40:36.905424 containerd[1456]: 2026-04-17 23:40:36.899 [INFO][5611] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:36.905424 containerd[1456]: 2026-04-17 23:40:36.902 [INFO][5604] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046" Apr 17 23:40:36.905949 containerd[1456]: time="2026-04-17T23:40:36.905464426Z" level=info msg="TearDown network for sandbox \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\" successfully" Apr 17 23:40:36.910183 containerd[1456]: time="2026-04-17T23:40:36.910142326Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:40:36.910324 containerd[1456]: time="2026-04-17T23:40:36.910216096Z" level=info msg="RemovePodSandbox \"918c1dcc5f79eaeefc0b87c037a408c74c000765df7103ed62d398b0a3b81046\" returns successfully" Apr 17 23:40:36.910798 containerd[1456]: time="2026-04-17T23:40:36.910743286Z" level=info msg="StopPodSandbox for \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\"" Apr 17 23:40:36.979777 containerd[1456]: 2026-04-17 23:40:36.944 [WARNING][5625] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"e92690a1-a6c6-4a96-8b33-2a2ebd323317", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978", Pod:"goldmane-cccfbd5cf-hxd5z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0c7c3c5d267", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:36.979777 containerd[1456]: 2026-04-17 23:40:36.944 [INFO][5625] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Apr 17 23:40:36.979777 containerd[1456]: 2026-04-17 23:40:36.944 [INFO][5625] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" iface="eth0" netns="" Apr 17 23:40:36.979777 containerd[1456]: 2026-04-17 23:40:36.944 [INFO][5625] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Apr 17 23:40:36.979777 containerd[1456]: 2026-04-17 23:40:36.944 [INFO][5625] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Apr 17 23:40:36.979777 containerd[1456]: 2026-04-17 23:40:36.965 [INFO][5632] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" HandleID="k8s-pod-network.7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Workload="172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0" Apr 17 23:40:36.979777 containerd[1456]: 2026-04-17 23:40:36.966 [INFO][5632] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:36.979777 containerd[1456]: 2026-04-17 23:40:36.966 [INFO][5632] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:36.979777 containerd[1456]: 2026-04-17 23:40:36.973 [WARNING][5632] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" HandleID="k8s-pod-network.7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Workload="172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0" Apr 17 23:40:36.979777 containerd[1456]: 2026-04-17 23:40:36.973 [INFO][5632] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" HandleID="k8s-pod-network.7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Workload="172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0" Apr 17 23:40:36.979777 containerd[1456]: 2026-04-17 23:40:36.974 [INFO][5632] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:36.979777 containerd[1456]: 2026-04-17 23:40:36.977 [INFO][5625] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Apr 17 23:40:36.980441 containerd[1456]: time="2026-04-17T23:40:36.979814097Z" level=info msg="TearDown network for sandbox \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\" successfully" Apr 17 23:40:36.980441 containerd[1456]: time="2026-04-17T23:40:36.979838147Z" level=info msg="StopPodSandbox for \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\" returns successfully" Apr 17 23:40:36.980441 containerd[1456]: time="2026-04-17T23:40:36.980216057Z" level=info msg="RemovePodSandbox for \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\"" Apr 17 23:40:36.980441 containerd[1456]: time="2026-04-17T23:40:36.980237847Z" level=info msg="Forcibly stopping sandbox \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\"" Apr 17 23:40:37.053740 containerd[1456]: 2026-04-17 23:40:37.015 [WARNING][5646] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"e92690a1-a6c6-4a96-8b33-2a2ebd323317", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"efe2704fe93c282c1bccd84fe85b5d96f217aa74848b1aec1c279edb28f08978", Pod:"goldmane-cccfbd5cf-hxd5z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0c7c3c5d267", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:37.053740 containerd[1456]: 2026-04-17 23:40:37.015 [INFO][5646] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Apr 17 23:40:37.053740 containerd[1456]: 2026-04-17 23:40:37.015 [INFO][5646] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" iface="eth0" netns="" Apr 17 23:40:37.053740 containerd[1456]: 2026-04-17 23:40:37.015 [INFO][5646] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Apr 17 23:40:37.053740 containerd[1456]: 2026-04-17 23:40:37.015 [INFO][5646] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Apr 17 23:40:37.053740 containerd[1456]: 2026-04-17 23:40:37.036 [INFO][5653] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" HandleID="k8s-pod-network.7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Workload="172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0" Apr 17 23:40:37.053740 containerd[1456]: 2026-04-17 23:40:37.036 [INFO][5653] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:37.053740 containerd[1456]: 2026-04-17 23:40:37.037 [INFO][5653] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:37.053740 containerd[1456]: 2026-04-17 23:40:37.042 [WARNING][5653] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" HandleID="k8s-pod-network.7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Workload="172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0" Apr 17 23:40:37.053740 containerd[1456]: 2026-04-17 23:40:37.042 [INFO][5653] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" HandleID="k8s-pod-network.7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Workload="172--238--189--76-k8s-goldmane--cccfbd5cf--hxd5z-eth0" Apr 17 23:40:37.053740 containerd[1456]: 2026-04-17 23:40:37.044 [INFO][5653] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:37.053740 containerd[1456]: 2026-04-17 23:40:37.049 [INFO][5646] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378" Apr 17 23:40:37.054103 containerd[1456]: time="2026-04-17T23:40:37.053768269Z" level=info msg="TearDown network for sandbox \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\" successfully" Apr 17 23:40:37.058572 containerd[1456]: time="2026-04-17T23:40:37.058469858Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:40:37.058572 containerd[1456]: time="2026-04-17T23:40:37.058534178Z" level=info msg="RemovePodSandbox \"7eeed4542b0a99707bb88695dc20e42985f0496072f1cef71f0562977c69a378\" returns successfully" Apr 17 23:40:37.059804 containerd[1456]: time="2026-04-17T23:40:37.059763988Z" level=info msg="StopPodSandbox for \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\"" Apr 17 23:40:37.145475 containerd[1456]: 2026-04-17 23:40:37.106 [WARNING][5667] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" WorkloadEndpoint="172--238--189--76-k8s-whisker--69974dd7c9--fzcgt-eth0" Apr 17 23:40:37.145475 containerd[1456]: 2026-04-17 23:40:37.106 [INFO][5667] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Apr 17 23:40:37.145475 containerd[1456]: 2026-04-17 23:40:37.106 [INFO][5667] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" iface="eth0" netns="" Apr 17 23:40:37.145475 containerd[1456]: 2026-04-17 23:40:37.106 [INFO][5667] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Apr 17 23:40:37.145475 containerd[1456]: 2026-04-17 23:40:37.106 [INFO][5667] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Apr 17 23:40:37.145475 containerd[1456]: 2026-04-17 23:40:37.132 [INFO][5674] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" HandleID="k8s-pod-network.3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Workload="172--238--189--76-k8s-whisker--69974dd7c9--fzcgt-eth0" Apr 17 23:40:37.145475 containerd[1456]: 2026-04-17 23:40:37.132 [INFO][5674] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:37.145475 containerd[1456]: 2026-04-17 23:40:37.132 [INFO][5674] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:37.145475 containerd[1456]: 2026-04-17 23:40:37.138 [WARNING][5674] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" HandleID="k8s-pod-network.3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Workload="172--238--189--76-k8s-whisker--69974dd7c9--fzcgt-eth0" Apr 17 23:40:37.145475 containerd[1456]: 2026-04-17 23:40:37.138 [INFO][5674] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" HandleID="k8s-pod-network.3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Workload="172--238--189--76-k8s-whisker--69974dd7c9--fzcgt-eth0" Apr 17 23:40:37.145475 containerd[1456]: 2026-04-17 23:40:37.139 [INFO][5674] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:37.145475 containerd[1456]: 2026-04-17 23:40:37.142 [INFO][5667] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Apr 17 23:40:37.145475 containerd[1456]: time="2026-04-17T23:40:37.145390198Z" level=info msg="TearDown network for sandbox \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\" successfully" Apr 17 23:40:37.145475 containerd[1456]: time="2026-04-17T23:40:37.145412258Z" level=info msg="StopPodSandbox for \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\" returns successfully" Apr 17 23:40:37.146277 containerd[1456]: time="2026-04-17T23:40:37.145718828Z" level=info msg="RemovePodSandbox for \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\"" Apr 17 23:40:37.146277 containerd[1456]: time="2026-04-17T23:40:37.146259488Z" level=info msg="Forcibly stopping sandbox \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\"" Apr 17 23:40:37.213425 containerd[1456]: 2026-04-17 23:40:37.178 [WARNING][5689] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" WorkloadEndpoint="172--238--189--76-k8s-whisker--69974dd7c9--fzcgt-eth0" Apr 17 23:40:37.213425 containerd[1456]: 2026-04-17 23:40:37.178 [INFO][5689] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Apr 17 23:40:37.213425 containerd[1456]: 2026-04-17 23:40:37.179 [INFO][5689] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" iface="eth0" netns="" Apr 17 23:40:37.213425 containerd[1456]: 2026-04-17 23:40:37.179 [INFO][5689] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Apr 17 23:40:37.213425 containerd[1456]: 2026-04-17 23:40:37.179 [INFO][5689] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Apr 17 23:40:37.213425 containerd[1456]: 2026-04-17 23:40:37.201 [INFO][5697] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" HandleID="k8s-pod-network.3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Workload="172--238--189--76-k8s-whisker--69974dd7c9--fzcgt-eth0" Apr 17 23:40:37.213425 containerd[1456]: 2026-04-17 23:40:37.201 [INFO][5697] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:37.213425 containerd[1456]: 2026-04-17 23:40:37.201 [INFO][5697] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:37.213425 containerd[1456]: 2026-04-17 23:40:37.206 [WARNING][5697] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" HandleID="k8s-pod-network.3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Workload="172--238--189--76-k8s-whisker--69974dd7c9--fzcgt-eth0" Apr 17 23:40:37.213425 containerd[1456]: 2026-04-17 23:40:37.206 [INFO][5697] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" HandleID="k8s-pod-network.3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Workload="172--238--189--76-k8s-whisker--69974dd7c9--fzcgt-eth0" Apr 17 23:40:37.213425 containerd[1456]: 2026-04-17 23:40:37.208 [INFO][5697] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:37.213425 containerd[1456]: 2026-04-17 23:40:37.210 [INFO][5689] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604" Apr 17 23:40:37.213789 containerd[1456]: time="2026-04-17T23:40:37.213474631Z" level=info msg="TearDown network for sandbox \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\" successfully" Apr 17 23:40:37.216808 containerd[1456]: time="2026-04-17T23:40:37.216778240Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:40:37.216889 containerd[1456]: time="2026-04-17T23:40:37.216829960Z" level=info msg="RemovePodSandbox \"3adeed4a9f4f9f0f87446a2c03ea31818425fa3d391e7db7c28c60acba7b5604\" returns successfully" Apr 17 23:40:37.217200 containerd[1456]: time="2026-04-17T23:40:37.217174940Z" level=info msg="StopPodSandbox for \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\"" Apr 17 23:40:37.289181 containerd[1456]: 2026-04-17 23:40:37.253 [WARNING][5711] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"98a1a605-7c5c-4f5c-801d-322ef1144e09", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558", Pod:"coredns-66bc5c9577-lh2x8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali171c3f9e0ca", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:37.289181 containerd[1456]: 2026-04-17 23:40:37.254 [INFO][5711] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Apr 17 23:40:37.289181 containerd[1456]: 2026-04-17 23:40:37.254 [INFO][5711] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" iface="eth0" netns="" Apr 17 23:40:37.289181 containerd[1456]: 2026-04-17 23:40:37.254 [INFO][5711] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Apr 17 23:40:37.289181 containerd[1456]: 2026-04-17 23:40:37.254 [INFO][5711] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Apr 17 23:40:37.289181 containerd[1456]: 2026-04-17 23:40:37.276 [INFO][5718] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" HandleID="k8s-pod-network.6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Workload="172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0" Apr 17 23:40:37.289181 containerd[1456]: 2026-04-17 23:40:37.276 [INFO][5718] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:37.289181 containerd[1456]: 2026-04-17 23:40:37.276 [INFO][5718] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:37.289181 containerd[1456]: 2026-04-17 23:40:37.282 [WARNING][5718] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" HandleID="k8s-pod-network.6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Workload="172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0" Apr 17 23:40:37.289181 containerd[1456]: 2026-04-17 23:40:37.282 [INFO][5718] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" HandleID="k8s-pod-network.6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Workload="172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0" Apr 17 23:40:37.289181 containerd[1456]: 2026-04-17 23:40:37.283 [INFO][5718] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:37.289181 containerd[1456]: 2026-04-17 23:40:37.286 [INFO][5711] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Apr 17 23:40:37.289676 containerd[1456]: time="2026-04-17T23:40:37.289220142Z" level=info msg="TearDown network for sandbox \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\" successfully" Apr 17 23:40:37.289676 containerd[1456]: time="2026-04-17T23:40:37.289248142Z" level=info msg="StopPodSandbox for \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\" returns successfully" Apr 17 23:40:37.289890 containerd[1456]: time="2026-04-17T23:40:37.289870882Z" level=info msg="RemovePodSandbox for \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\"" Apr 17 23:40:37.289995 containerd[1456]: time="2026-04-17T23:40:37.289898442Z" level=info msg="Forcibly stopping sandbox \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\"" Apr 17 23:40:37.358420 containerd[1456]: 2026-04-17 23:40:37.323 [WARNING][5733] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"98a1a605-7c5c-4f5c-801d-322ef1144e09", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"ed287fbb525078d5629b84757676a08856533305147479f8af0557bb00bb2558", Pod:"coredns-66bc5c9577-lh2x8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali171c3f9e0ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:37.358420 containerd[1456]: 2026-04-17 23:40:37.323 [INFO][5733] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Apr 17 23:40:37.358420 containerd[1456]: 2026-04-17 23:40:37.324 [INFO][5733] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" iface="eth0" netns="" Apr 17 23:40:37.358420 containerd[1456]: 2026-04-17 23:40:37.324 [INFO][5733] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Apr 17 23:40:37.358420 containerd[1456]: 2026-04-17 23:40:37.324 [INFO][5733] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Apr 17 23:40:37.358420 containerd[1456]: 2026-04-17 23:40:37.345 [INFO][5740] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" HandleID="k8s-pod-network.6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Workload="172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0" Apr 17 23:40:37.358420 containerd[1456]: 2026-04-17 23:40:37.345 [INFO][5740] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:37.358420 containerd[1456]: 2026-04-17 23:40:37.346 [INFO][5740] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:37.358420 containerd[1456]: 2026-04-17 23:40:37.351 [WARNING][5740] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" HandleID="k8s-pod-network.6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Workload="172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0" Apr 17 23:40:37.358420 containerd[1456]: 2026-04-17 23:40:37.351 [INFO][5740] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" HandleID="k8s-pod-network.6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Workload="172--238--189--76-k8s-coredns--66bc5c9577--lh2x8-eth0" Apr 17 23:40:37.358420 containerd[1456]: 2026-04-17 23:40:37.353 [INFO][5740] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:37.358420 containerd[1456]: 2026-04-17 23:40:37.355 [INFO][5733] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416" Apr 17 23:40:37.358420 containerd[1456]: time="2026-04-17T23:40:37.358375564Z" level=info msg="TearDown network for sandbox \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\" successfully" Apr 17 23:40:37.368959 containerd[1456]: time="2026-04-17T23:40:37.368882893Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:40:37.368959 containerd[1456]: time="2026-04-17T23:40:37.368945593Z" level=info msg="RemovePodSandbox \"6df792b02e813b3e5b8df7ca42daa678e05de178a9046ca37ab7a0e640259416\" returns successfully" Apr 17 23:40:37.369737 containerd[1456]: time="2026-04-17T23:40:37.369545063Z" level=info msg="StopPodSandbox for \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\"" Apr 17 23:40:37.437825 containerd[1456]: 2026-04-17 23:40:37.405 [WARNING][5754] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0", GenerateName:"calico-apiserver-747d4d9564-", Namespace:"calico-system", SelfLink:"", UID:"fe4b5225-389a-4c5f-90d9-d343b520891b", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d4d9564", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2", Pod:"calico-apiserver-747d4d9564-87p66", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibeb2ced4cd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:37.437825 containerd[1456]: 2026-04-17 23:40:37.405 [INFO][5754] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Apr 17 23:40:37.437825 containerd[1456]: 2026-04-17 23:40:37.406 [INFO][5754] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" iface="eth0" netns="" Apr 17 23:40:37.437825 containerd[1456]: 2026-04-17 23:40:37.406 [INFO][5754] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Apr 17 23:40:37.437825 containerd[1456]: 2026-04-17 23:40:37.406 [INFO][5754] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Apr 17 23:40:37.437825 containerd[1456]: 2026-04-17 23:40:37.426 [INFO][5761] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" HandleID="k8s-pod-network.6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0" Apr 17 23:40:37.437825 containerd[1456]: 2026-04-17 23:40:37.426 [INFO][5761] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:37.437825 containerd[1456]: 2026-04-17 23:40:37.426 [INFO][5761] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:37.437825 containerd[1456]: 2026-04-17 23:40:37.431 [WARNING][5761] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" HandleID="k8s-pod-network.6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0" Apr 17 23:40:37.437825 containerd[1456]: 2026-04-17 23:40:37.431 [INFO][5761] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" HandleID="k8s-pod-network.6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0" Apr 17 23:40:37.437825 containerd[1456]: 2026-04-17 23:40:37.433 [INFO][5761] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:37.437825 containerd[1456]: 2026-04-17 23:40:37.435 [INFO][5754] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Apr 17 23:40:37.438195 containerd[1456]: time="2026-04-17T23:40:37.437866635Z" level=info msg="TearDown network for sandbox \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\" successfully" Apr 17 23:40:37.438195 containerd[1456]: time="2026-04-17T23:40:37.437904765Z" level=info msg="StopPodSandbox for \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\" returns successfully" Apr 17 23:40:37.438676 containerd[1456]: time="2026-04-17T23:40:37.438391625Z" level=info msg="RemovePodSandbox for \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\"" Apr 17 23:40:37.438676 containerd[1456]: time="2026-04-17T23:40:37.438429585Z" level=info msg="Forcibly stopping sandbox \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\"" Apr 17 23:40:37.502953 containerd[1456]: 2026-04-17 23:40:37.470 [WARNING][5775] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0", GenerateName:"calico-apiserver-747d4d9564-", Namespace:"calico-system", SelfLink:"", UID:"fe4b5225-389a-4c5f-90d9-d343b520891b", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d4d9564", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"a1cd4d089791bde480b9dead5c623707de7b97ec6125452b0dc122d51046ded2", Pod:"calico-apiserver-747d4d9564-87p66", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibeb2ced4cd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:37.502953 containerd[1456]: 2026-04-17 23:40:37.470 [INFO][5775] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Apr 17 23:40:37.502953 containerd[1456]: 2026-04-17 23:40:37.470 [INFO][5775] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" iface="eth0" netns="" Apr 17 23:40:37.502953 containerd[1456]: 2026-04-17 23:40:37.470 [INFO][5775] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Apr 17 23:40:37.502953 containerd[1456]: 2026-04-17 23:40:37.470 [INFO][5775] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Apr 17 23:40:37.502953 containerd[1456]: 2026-04-17 23:40:37.491 [INFO][5782] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" HandleID="k8s-pod-network.6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0" Apr 17 23:40:37.502953 containerd[1456]: 2026-04-17 23:40:37.491 [INFO][5782] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:37.502953 containerd[1456]: 2026-04-17 23:40:37.491 [INFO][5782] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:37.502953 containerd[1456]: 2026-04-17 23:40:37.496 [WARNING][5782] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" HandleID="k8s-pod-network.6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0" Apr 17 23:40:37.502953 containerd[1456]: 2026-04-17 23:40:37.496 [INFO][5782] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" HandleID="k8s-pod-network.6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Workload="172--238--189--76-k8s-calico--apiserver--747d4d9564--87p66-eth0" Apr 17 23:40:37.502953 containerd[1456]: 2026-04-17 23:40:37.497 [INFO][5782] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:37.502953 containerd[1456]: 2026-04-17 23:40:37.500 [INFO][5775] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f" Apr 17 23:40:37.503317 containerd[1456]: time="2026-04-17T23:40:37.502996427Z" level=info msg="TearDown network for sandbox \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\" successfully" Apr 17 23:40:37.506403 containerd[1456]: time="2026-04-17T23:40:37.506372197Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:40:37.506597 containerd[1456]: time="2026-04-17T23:40:37.506417517Z" level=info msg="RemovePodSandbox \"6deeefc925f4d5b0fbb6e6e1c52c6a92d94a1b6a24e5bd873a4dc844704fe46f\" returns successfully" Apr 17 23:40:37.506983 containerd[1456]: time="2026-04-17T23:40:37.506944837Z" level=info msg="StopPodSandbox for \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\"" Apr 17 23:40:37.577408 containerd[1456]: 2026-04-17 23:40:37.543 [WARNING][5796] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3b2079f5-4ea9-4796-8181-3d79d0da7db2", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2", Pod:"coredns-66bc5c9577-2kcx5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid1358daf144", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:37.577408 containerd[1456]: 2026-04-17 23:40:37.544 [INFO][5796] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Apr 17 23:40:37.577408 containerd[1456]: 2026-04-17 23:40:37.544 [INFO][5796] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" iface="eth0" netns="" Apr 17 23:40:37.577408 containerd[1456]: 2026-04-17 23:40:37.544 [INFO][5796] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Apr 17 23:40:37.577408 containerd[1456]: 2026-04-17 23:40:37.544 [INFO][5796] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Apr 17 23:40:37.577408 containerd[1456]: 2026-04-17 23:40:37.565 [INFO][5803] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" HandleID="k8s-pod-network.3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Workload="172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0" Apr 17 23:40:37.577408 containerd[1456]: 2026-04-17 23:40:37.565 [INFO][5803] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:37.577408 containerd[1456]: 2026-04-17 23:40:37.565 [INFO][5803] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:37.577408 containerd[1456]: 2026-04-17 23:40:37.571 [WARNING][5803] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" HandleID="k8s-pod-network.3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Workload="172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0" Apr 17 23:40:37.577408 containerd[1456]: 2026-04-17 23:40:37.571 [INFO][5803] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" HandleID="k8s-pod-network.3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Workload="172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0" Apr 17 23:40:37.577408 containerd[1456]: 2026-04-17 23:40:37.572 [INFO][5803] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:37.577408 containerd[1456]: 2026-04-17 23:40:37.575 [INFO][5796] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Apr 17 23:40:37.577931 containerd[1456]: time="2026-04-17T23:40:37.577441139Z" level=info msg="TearDown network for sandbox \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\" successfully" Apr 17 23:40:37.577931 containerd[1456]: time="2026-04-17T23:40:37.577464169Z" level=info msg="StopPodSandbox for \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\" returns successfully" Apr 17 23:40:37.577978 containerd[1456]: time="2026-04-17T23:40:37.577963609Z" level=info msg="RemovePodSandbox for \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\"" Apr 17 23:40:37.578003 containerd[1456]: time="2026-04-17T23:40:37.577987339Z" level=info msg="Forcibly stopping sandbox \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\"" Apr 17 23:40:37.647811 containerd[1456]: 2026-04-17 23:40:37.612 [WARNING][5817] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3b2079f5-4ea9-4796-8181-3d79d0da7db2", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"3ec1dbb8c66e0bcc875a96e882e365c51dca4555c39237c1581eeb76ce66eeb2", Pod:"coredns-66bc5c9577-2kcx5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid1358daf144", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:37.647811 containerd[1456]: 2026-04-17 23:40:37.613 [INFO][5817] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Apr 17 23:40:37.647811 containerd[1456]: 2026-04-17 23:40:37.613 [INFO][5817] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" iface="eth0" netns="" Apr 17 23:40:37.647811 containerd[1456]: 2026-04-17 23:40:37.613 [INFO][5817] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Apr 17 23:40:37.647811 containerd[1456]: 2026-04-17 23:40:37.613 [INFO][5817] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Apr 17 23:40:37.647811 containerd[1456]: 2026-04-17 23:40:37.634 [INFO][5824] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" HandleID="k8s-pod-network.3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Workload="172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0" Apr 17 23:40:37.647811 containerd[1456]: 2026-04-17 23:40:37.634 [INFO][5824] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:37.647811 containerd[1456]: 2026-04-17 23:40:37.634 [INFO][5824] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:37.647811 containerd[1456]: 2026-04-17 23:40:37.640 [WARNING][5824] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" HandleID="k8s-pod-network.3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Workload="172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0" Apr 17 23:40:37.647811 containerd[1456]: 2026-04-17 23:40:37.640 [INFO][5824] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" HandleID="k8s-pod-network.3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Workload="172--238--189--76-k8s-coredns--66bc5c9577--2kcx5-eth0" Apr 17 23:40:37.647811 containerd[1456]: 2026-04-17 23:40:37.641 [INFO][5824] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:37.647811 containerd[1456]: 2026-04-17 23:40:37.643 [INFO][5817] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6" Apr 17 23:40:37.647811 containerd[1456]: time="2026-04-17T23:40:37.646294021Z" level=info msg="TearDown network for sandbox \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\" successfully" Apr 17 23:40:37.650174 containerd[1456]: time="2026-04-17T23:40:37.650137231Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:40:37.650259 containerd[1456]: time="2026-04-17T23:40:37.650204771Z" level=info msg="RemovePodSandbox \"3c3cb362c349be489a110329aea168d0e2721db23d47bf790d2916570b3e88f6\" returns successfully" Apr 17 23:40:37.650777 containerd[1456]: time="2026-04-17T23:40:37.650740541Z" level=info msg="StopPodSandbox for \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\"" Apr 17 23:40:37.730379 containerd[1456]: 2026-04-17 23:40:37.687 [WARNING][5838] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-csi--node--driver--9m4mr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7519db54-398f-4489-8839-90013af059d5", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8", Pod:"csi-node-driver-9m4mr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califfc2c550906", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:37.730379 containerd[1456]: 2026-04-17 23:40:37.688 [INFO][5838] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Apr 17 23:40:37.730379 containerd[1456]: 2026-04-17 23:40:37.688 [INFO][5838] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" iface="eth0" netns="" Apr 17 23:40:37.730379 containerd[1456]: 2026-04-17 23:40:37.688 [INFO][5838] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Apr 17 23:40:37.730379 containerd[1456]: 2026-04-17 23:40:37.688 [INFO][5838] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Apr 17 23:40:37.730379 containerd[1456]: 2026-04-17 23:40:37.713 [INFO][5846] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" HandleID="k8s-pod-network.251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Workload="172--238--189--76-k8s-csi--node--driver--9m4mr-eth0" Apr 17 23:40:37.730379 containerd[1456]: 2026-04-17 23:40:37.713 [INFO][5846] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:37.730379 containerd[1456]: 2026-04-17 23:40:37.713 [INFO][5846] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:37.730379 containerd[1456]: 2026-04-17 23:40:37.722 [WARNING][5846] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" HandleID="k8s-pod-network.251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Workload="172--238--189--76-k8s-csi--node--driver--9m4mr-eth0" Apr 17 23:40:37.730379 containerd[1456]: 2026-04-17 23:40:37.722 [INFO][5846] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" HandleID="k8s-pod-network.251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Workload="172--238--189--76-k8s-csi--node--driver--9m4mr-eth0" Apr 17 23:40:37.730379 containerd[1456]: 2026-04-17 23:40:37.724 [INFO][5846] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:37.730379 containerd[1456]: 2026-04-17 23:40:37.727 [INFO][5838] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Apr 17 23:40:37.730968 containerd[1456]: time="2026-04-17T23:40:37.730412941Z" level=info msg="TearDown network for sandbox \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\" successfully" Apr 17 23:40:37.730968 containerd[1456]: time="2026-04-17T23:40:37.730439001Z" level=info msg="StopPodSandbox for \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\" returns successfully" Apr 17 23:40:37.731082 containerd[1456]: time="2026-04-17T23:40:37.731062581Z" level=info msg="RemovePodSandbox for \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\"" Apr 17 23:40:37.731138 containerd[1456]: time="2026-04-17T23:40:37.731087981Z" level=info msg="Forcibly stopping sandbox \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\"" Apr 17 23:40:37.804820 containerd[1456]: 2026-04-17 23:40:37.768 [WARNING][5860] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-csi--node--driver--9m4mr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7519db54-398f-4489-8839-90013af059d5", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"9e4de0900b2998f13eb37de8f68ba527566f57adc1b052a962f2d433edff69a8", Pod:"csi-node-driver-9m4mr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califfc2c550906", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:37.804820 containerd[1456]: 2026-04-17 23:40:37.769 [INFO][5860] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Apr 17 23:40:37.804820 containerd[1456]: 2026-04-17 23:40:37.769 [INFO][5860] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" iface="eth0" netns="" Apr 17 23:40:37.804820 containerd[1456]: 2026-04-17 23:40:37.769 [INFO][5860] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Apr 17 23:40:37.804820 containerd[1456]: 2026-04-17 23:40:37.769 [INFO][5860] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Apr 17 23:40:37.804820 containerd[1456]: 2026-04-17 23:40:37.791 [INFO][5867] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" HandleID="k8s-pod-network.251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Workload="172--238--189--76-k8s-csi--node--driver--9m4mr-eth0" Apr 17 23:40:37.804820 containerd[1456]: 2026-04-17 23:40:37.792 [INFO][5867] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:37.804820 containerd[1456]: 2026-04-17 23:40:37.792 [INFO][5867] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:37.804820 containerd[1456]: 2026-04-17 23:40:37.797 [WARNING][5867] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" HandleID="k8s-pod-network.251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Workload="172--238--189--76-k8s-csi--node--driver--9m4mr-eth0" Apr 17 23:40:37.804820 containerd[1456]: 2026-04-17 23:40:37.797 [INFO][5867] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" HandleID="k8s-pod-network.251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Workload="172--238--189--76-k8s-csi--node--driver--9m4mr-eth0" Apr 17 23:40:37.804820 containerd[1456]: 2026-04-17 23:40:37.799 [INFO][5867] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:37.804820 containerd[1456]: 2026-04-17 23:40:37.802 [INFO][5860] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029" Apr 17 23:40:37.805604 containerd[1456]: time="2026-04-17T23:40:37.804854923Z" level=info msg="TearDown network for sandbox \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\" successfully" Apr 17 23:40:37.808131 containerd[1456]: time="2026-04-17T23:40:37.808091363Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:40:37.808131 containerd[1456]: time="2026-04-17T23:40:37.808165603Z" level=info msg="RemovePodSandbox \"251dab7827633df03eadca23011fecea115d7b59a2bbecc0568d89a846395029\" returns successfully" Apr 17 23:40:37.808777 containerd[1456]: time="2026-04-17T23:40:37.808754703Z" level=info msg="StopPodSandbox for \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\"" Apr 17 23:40:37.877842 containerd[1456]: 2026-04-17 23:40:37.841 [WARNING][5882] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0", GenerateName:"calico-kube-controllers-5d5d6df97d-", Namespace:"calico-system", SelfLink:"", UID:"b465cf39-1a7a-43c8-8b20-c06a445d067b", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d5d6df97d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494", Pod:"calico-kube-controllers-5d5d6df97d-qjfqb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali27081aead17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:37.877842 containerd[1456]: 2026-04-17 23:40:37.841 [INFO][5882] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Apr 17 23:40:37.877842 containerd[1456]: 2026-04-17 23:40:37.841 [INFO][5882] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" iface="eth0" netns="" Apr 17 23:40:37.877842 containerd[1456]: 2026-04-17 23:40:37.841 [INFO][5882] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Apr 17 23:40:37.877842 containerd[1456]: 2026-04-17 23:40:37.841 [INFO][5882] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Apr 17 23:40:37.877842 containerd[1456]: 2026-04-17 23:40:37.865 [INFO][5889] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" HandleID="k8s-pod-network.5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Workload="172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0" Apr 17 23:40:37.877842 containerd[1456]: 2026-04-17 23:40:37.865 [INFO][5889] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:37.877842 containerd[1456]: 2026-04-17 23:40:37.865 [INFO][5889] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:37.877842 containerd[1456]: 2026-04-17 23:40:37.872 [WARNING][5889] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" HandleID="k8s-pod-network.5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Workload="172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0" Apr 17 23:40:37.877842 containerd[1456]: 2026-04-17 23:40:37.872 [INFO][5889] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" HandleID="k8s-pod-network.5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Workload="172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0" Apr 17 23:40:37.877842 containerd[1456]: 2026-04-17 23:40:37.873 [INFO][5889] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:37.877842 containerd[1456]: 2026-04-17 23:40:37.875 [INFO][5882] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Apr 17 23:40:37.878330 containerd[1456]: time="2026-04-17T23:40:37.877882365Z" level=info msg="TearDown network for sandbox \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\" successfully" Apr 17 23:40:37.878330 containerd[1456]: time="2026-04-17T23:40:37.877905535Z" level=info msg="StopPodSandbox for \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\" returns successfully" Apr 17 23:40:37.878330 containerd[1456]: time="2026-04-17T23:40:37.878292934Z" level=info msg="RemovePodSandbox for \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\"" Apr 17 23:40:37.878330 containerd[1456]: time="2026-04-17T23:40:37.878313694Z" level=info msg="Forcibly stopping sandbox \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\"" Apr 17 23:40:37.950814 containerd[1456]: 2026-04-17 23:40:37.915 [WARNING][5903] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0", GenerateName:"calico-kube-controllers-5d5d6df97d-", Namespace:"calico-system", SelfLink:"", UID:"b465cf39-1a7a-43c8-8b20-c06a445d067b", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d5d6df97d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-189-76", ContainerID:"2ca52704a28ec80e90353ecffeabb9fe0b1393210ed4c0f1dd4fee73f627e494", Pod:"calico-kube-controllers-5d5d6df97d-qjfqb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali27081aead17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:37.950814 containerd[1456]: 2026-04-17 23:40:37.915 [INFO][5903] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Apr 17 23:40:37.950814 containerd[1456]: 2026-04-17 23:40:37.915 [INFO][5903] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" iface="eth0" netns="" Apr 17 23:40:37.950814 containerd[1456]: 2026-04-17 23:40:37.915 [INFO][5903] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Apr 17 23:40:37.950814 containerd[1456]: 2026-04-17 23:40:37.915 [INFO][5903] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Apr 17 23:40:37.950814 containerd[1456]: 2026-04-17 23:40:37.937 [INFO][5910] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" HandleID="k8s-pod-network.5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Workload="172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0" Apr 17 23:40:37.950814 containerd[1456]: 2026-04-17 23:40:37.937 [INFO][5910] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:37.950814 containerd[1456]: 2026-04-17 23:40:37.937 [INFO][5910] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:37.950814 containerd[1456]: 2026-04-17 23:40:37.943 [WARNING][5910] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" HandleID="k8s-pod-network.5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Workload="172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0" Apr 17 23:40:37.950814 containerd[1456]: 2026-04-17 23:40:37.943 [INFO][5910] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" HandleID="k8s-pod-network.5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Workload="172--238--189--76-k8s-calico--kube--controllers--5d5d6df97d--qjfqb-eth0" Apr 17 23:40:37.950814 containerd[1456]: 2026-04-17 23:40:37.944 [INFO][5910] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:37.950814 containerd[1456]: 2026-04-17 23:40:37.946 [INFO][5903] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914" Apr 17 23:40:37.950814 containerd[1456]: time="2026-04-17T23:40:37.948983416Z" level=info msg="TearDown network for sandbox \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\" successfully" Apr 17 23:40:37.952536 containerd[1456]: time="2026-04-17T23:40:37.952513366Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:40:37.952644 containerd[1456]: time="2026-04-17T23:40:37.952628366Z" level=info msg="RemovePodSandbox \"5a13e16812d581b12a7cb0067c63cff7463ce548941a22eaca3ad14368de3914\" returns successfully" Apr 17 23:40:51.764908 kubelet[2540]: E0417 23:40:51.764871 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:40:55.765040 kubelet[2540]: E0417 23:40:55.765005 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:41:06.765323 kubelet[2540]: E0417 23:41:06.764878 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:41:08.716737 systemd[1]: run-containerd-runc-k8s.io-091bb9b76b32612ca0cdbad784ad26d50a3fc1766f3a5848bc91509bfe68ec97-runc.f4J1oG.mount: Deactivated successfully. 
Apr 17 23:41:11.765636 kubelet[2540]: E0417 23:41:11.765446 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:41:20.765837 kubelet[2540]: E0417 23:41:20.765092 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:41:23.765172 kubelet[2540]: E0417 23:41:23.764857 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:41:33.768436 kubelet[2540]: E0417 23:41:33.768340 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:41:38.716515 systemd[1]: run-containerd-runc-k8s.io-091bb9b76b32612ca0cdbad784ad26d50a3fc1766f3a5848bc91509bfe68ec97-runc.pg362C.mount: Deactivated successfully. Apr 17 23:41:45.023899 systemd[1]: Started sshd@7-172.238.189.76:22-50.85.169.122:55092.service - OpenSSH per-connection server daemon (50.85.169.122:55092). Apr 17 23:41:45.631320 sshd[6166]: Accepted publickey for core from 50.85.169.122 port 55092 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 17 23:41:45.633932 sshd[6166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:41:45.640556 systemd-logind[1438]: New session 8 of user core. Apr 17 23:41:45.647825 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 17 23:41:46.175217 sshd[6166]: pam_unix(sshd:session): session closed for user core Apr 17 23:41:46.182473 systemd-logind[1438]: Session 8 logged out. Waiting for processes to exit. 
Apr 17 23:41:46.183226 systemd[1]: sshd@7-172.238.189.76:22-50.85.169.122:55092.service: Deactivated successfully. Apr 17 23:41:46.188738 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 23:41:46.189948 systemd-logind[1438]: Removed session 8. Apr 17 23:41:49.068007 systemd[1]: run-containerd-runc-k8s.io-ddb86888043b0e5cab9d0573394ec307337b54dc55e4b656b3e6922f38bc451e-runc.03qWDX.mount: Deactivated successfully. Apr 17 23:41:51.282866 systemd[1]: Started sshd@8-172.238.189.76:22-50.85.169.122:60598.service - OpenSSH per-connection server daemon (50.85.169.122:60598). Apr 17 23:41:51.913367 sshd[6246]: Accepted publickey for core from 50.85.169.122 port 60598 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 17 23:41:51.915421 sshd[6246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:41:51.920985 systemd-logind[1438]: New session 9 of user core. Apr 17 23:41:51.927810 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 23:41:52.438411 sshd[6246]: pam_unix(sshd:session): session closed for user core Apr 17 23:41:52.442379 systemd[1]: sshd@8-172.238.189.76:22-50.85.169.122:60598.service: Deactivated successfully. Apr 17 23:41:52.447201 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 23:41:52.449670 systemd-logind[1438]: Session 9 logged out. Waiting for processes to exit. Apr 17 23:41:52.451521 systemd-logind[1438]: Removed session 9. Apr 17 23:41:57.557111 systemd[1]: Started sshd@9-172.238.189.76:22-50.85.169.122:60602.service - OpenSSH per-connection server daemon (50.85.169.122:60602). Apr 17 23:41:58.185728 sshd[6291]: Accepted publickey for core from 50.85.169.122 port 60602 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 17 23:41:58.187007 sshd[6291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:41:58.193375 systemd-logind[1438]: New session 10 of user core. 
Apr 17 23:41:58.195896 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 17 23:41:58.707213 sshd[6291]: pam_unix(sshd:session): session closed for user core Apr 17 23:41:58.713655 systemd[1]: sshd@9-172.238.189.76:22-50.85.169.122:60602.service: Deactivated successfully. Apr 17 23:41:58.718306 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 23:41:58.719399 systemd-logind[1438]: Session 10 logged out. Waiting for processes to exit. Apr 17 23:41:58.721131 systemd-logind[1438]: Removed session 10. Apr 17 23:41:58.768093 kubelet[2540]: E0417 23:41:58.768063 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Apr 17 23:41:58.831950 systemd[1]: Started sshd@10-172.238.189.76:22-50.85.169.122:60610.service - OpenSSH per-connection server daemon (50.85.169.122:60610). Apr 17 23:41:59.468633 sshd[6324]: Accepted publickey for core from 50.85.169.122 port 60610 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 17 23:41:59.470597 sshd[6324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:41:59.477166 systemd-logind[1438]: New session 11 of user core. Apr 17 23:41:59.484903 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 17 23:42:00.018537 sshd[6324]: pam_unix(sshd:session): session closed for user core Apr 17 23:42:00.023201 systemd-logind[1438]: Session 11 logged out. Waiting for processes to exit. Apr 17 23:42:00.024480 systemd[1]: sshd@10-172.238.189.76:22-50.85.169.122:60610.service: Deactivated successfully. Apr 17 23:42:00.026992 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 23:42:00.027860 systemd-logind[1438]: Removed session 11. Apr 17 23:42:00.131121 systemd[1]: Started sshd@11-172.238.189.76:22-50.85.169.122:40644.service - OpenSSH per-connection server daemon (50.85.169.122:40644). 
Apr 17 23:42:00.735742 sshd[6335]: Accepted publickey for core from 50.85.169.122 port 40644 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U
Apr 17 23:42:00.737447 sshd[6335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:42:00.742645 systemd-logind[1438]: New session 12 of user core.
Apr 17 23:42:00.747829 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 17 23:42:01.225196 sshd[6335]: pam_unix(sshd:session): session closed for user core
Apr 17 23:42:01.228652 systemd-logind[1438]: Session 12 logged out. Waiting for processes to exit.
Apr 17 23:42:01.229878 systemd[1]: sshd@11-172.238.189.76:22-50.85.169.122:40644.service: Deactivated successfully.
Apr 17 23:42:01.232417 systemd[1]: session-12.scope: Deactivated successfully.
Apr 17 23:42:01.238237 systemd-logind[1438]: Removed session 12.
Apr 17 23:42:02.764552 kubelet[2540]: E0417 23:42:02.764507 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Apr 17 23:42:06.336110 systemd[1]: Started sshd@12-172.238.189.76:22-50.85.169.122:40652.service - OpenSSH per-connection server daemon (50.85.169.122:40652).
Apr 17 23:42:06.938361 sshd[6348]: Accepted publickey for core from 50.85.169.122 port 40652 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U
Apr 17 23:42:06.939931 sshd[6348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:42:06.944367 systemd-logind[1438]: New session 13 of user core.
Apr 17 23:42:06.947808 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 17 23:42:07.433289 sshd[6348]: pam_unix(sshd:session): session closed for user core
Apr 17 23:42:07.436634 systemd-logind[1438]: Session 13 logged out. Waiting for processes to exit.
Apr 17 23:42:07.437422 systemd[1]: sshd@12-172.238.189.76:22-50.85.169.122:40652.service: Deactivated successfully.
Apr 17 23:42:07.439393 systemd[1]: session-13.scope: Deactivated successfully.
Apr 17 23:42:07.440279 systemd-logind[1438]: Removed session 13.
Apr 17 23:42:07.538337 systemd[1]: Started sshd@13-172.238.189.76:22-50.85.169.122:40662.service - OpenSSH per-connection server daemon (50.85.169.122:40662).
Apr 17 23:42:08.137063 sshd[6362]: Accepted publickey for core from 50.85.169.122 port 40662 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U
Apr 17 23:42:08.138619 sshd[6362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:42:08.144559 systemd-logind[1438]: New session 14 of user core.
Apr 17 23:42:08.150025 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 17 23:42:09.067464 sshd[6362]: pam_unix(sshd:session): session closed for user core
Apr 17 23:42:09.074048 systemd-logind[1438]: Session 14 logged out. Waiting for processes to exit.
Apr 17 23:42:09.076052 systemd[1]: sshd@13-172.238.189.76:22-50.85.169.122:40662.service: Deactivated successfully.
Apr 17 23:42:09.079559 systemd[1]: session-14.scope: Deactivated successfully.
Apr 17 23:42:09.082484 systemd-logind[1438]: Removed session 14.
Apr 17 23:42:09.173917 systemd[1]: Started sshd@14-172.238.189.76:22-50.85.169.122:40674.service - OpenSSH per-connection server daemon (50.85.169.122:40674).
Apr 17 23:42:09.779817 sshd[6393]: Accepted publickey for core from 50.85.169.122 port 40674 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U
Apr 17 23:42:09.782112 sshd[6393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:42:09.787553 systemd-logind[1438]: New session 15 of user core.
Apr 17 23:42:09.792838 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 17 23:42:10.685319 sshd[6393]: pam_unix(sshd:session): session closed for user core
Apr 17 23:42:10.689464 systemd-logind[1438]: Session 15 logged out. Waiting for processes to exit.
Apr 17 23:42:10.690162 systemd[1]: sshd@14-172.238.189.76:22-50.85.169.122:40674.service: Deactivated successfully.
Apr 17 23:42:10.692239 systemd[1]: session-15.scope: Deactivated successfully.
Apr 17 23:42:10.693109 systemd-logind[1438]: Removed session 15.
Apr 17 23:42:10.802111 systemd[1]: Started sshd@15-172.238.189.76:22-50.85.169.122:42368.service - OpenSSH per-connection server daemon (50.85.169.122:42368).
Apr 17 23:42:11.403710 sshd[6417]: Accepted publickey for core from 50.85.169.122 port 42368 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U
Apr 17 23:42:11.405609 sshd[6417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:42:11.410533 systemd-logind[1438]: New session 16 of user core.
Apr 17 23:42:11.415257 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 17 23:42:12.034197 sshd[6417]: pam_unix(sshd:session): session closed for user core
Apr 17 23:42:12.038755 systemd-logind[1438]: Session 16 logged out. Waiting for processes to exit.
Apr 17 23:42:12.039742 systemd[1]: sshd@15-172.238.189.76:22-50.85.169.122:42368.service: Deactivated successfully.
Apr 17 23:42:12.041547 systemd[1]: session-16.scope: Deactivated successfully.
Apr 17 23:42:12.043255 systemd-logind[1438]: Removed session 16.
Apr 17 23:42:12.151374 systemd[1]: Started sshd@16-172.238.189.76:22-50.85.169.122:42378.service - OpenSSH per-connection server daemon (50.85.169.122:42378).
Apr 17 23:42:12.764731 kubelet[2540]: E0417 23:42:12.764182 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Apr 17 23:42:12.780477 sshd[6430]: Accepted publickey for core from 50.85.169.122 port 42378 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U
Apr 17 23:42:12.782048 sshd[6430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:42:12.786364 systemd-logind[1438]: New session 17 of user core.
Apr 17 23:42:12.792837 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 17 23:42:13.295949 sshd[6430]: pam_unix(sshd:session): session closed for user core
Apr 17 23:42:13.301428 systemd[1]: sshd@16-172.238.189.76:22-50.85.169.122:42378.service: Deactivated successfully.
Apr 17 23:42:13.304289 systemd[1]: session-17.scope: Deactivated successfully.
Apr 17 23:42:13.307037 systemd-logind[1438]: Session 17 logged out. Waiting for processes to exit.
Apr 17 23:42:13.309087 systemd-logind[1438]: Removed session 17.
Apr 17 23:42:18.402674 systemd[1]: Started sshd@17-172.238.189.76:22-50.85.169.122:42394.service - OpenSSH per-connection server daemon (50.85.169.122:42394).
Apr 17 23:42:19.005635 sshd[6468]: Accepted publickey for core from 50.85.169.122 port 42394 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U
Apr 17 23:42:19.007347 sshd[6468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:42:19.011730 systemd-logind[1438]: New session 18 of user core.
Apr 17 23:42:19.017809 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 17 23:42:19.491876 sshd[6468]: pam_unix(sshd:session): session closed for user core
Apr 17 23:42:19.496543 systemd-logind[1438]: Session 18 logged out. Waiting for processes to exit.
Apr 17 23:42:19.497432 systemd[1]: sshd@17-172.238.189.76:22-50.85.169.122:42394.service: Deactivated successfully.
Apr 17 23:42:19.500333 systemd[1]: session-18.scope: Deactivated successfully.
Apr 17 23:42:19.501474 systemd-logind[1438]: Removed session 18.
Apr 17 23:42:24.608915 systemd[1]: Started sshd@18-172.238.189.76:22-50.85.169.122:45324.service - OpenSSH per-connection server daemon (50.85.169.122:45324).
Apr 17 23:42:25.256660 sshd[6501]: Accepted publickey for core from 50.85.169.122 port 45324 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U
Apr 17 23:42:25.257913 sshd[6501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:42:25.263535 systemd-logind[1438]: New session 19 of user core.
Apr 17 23:42:25.272858 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 17 23:42:25.764730 kubelet[2540]: E0417 23:42:25.764665 2540 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Apr 17 23:42:25.783725 sshd[6501]: pam_unix(sshd:session): session closed for user core
Apr 17 23:42:25.788275 systemd-logind[1438]: Session 19 logged out. Waiting for processes to exit.
Apr 17 23:42:25.789260 systemd[1]: sshd@18-172.238.189.76:22-50.85.169.122:45324.service: Deactivated successfully.
Apr 17 23:42:25.791321 systemd[1]: session-19.scope: Deactivated successfully.
Apr 17 23:42:25.793196 systemd-logind[1438]: Removed session 19.