Apr 17 23:56:23.036224 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026 Apr 17 23:56:23.036247 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:56:23.036256 kernel: BIOS-provided physical RAM map: Apr 17 23:56:23.036262 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Apr 17 23:56:23.036268 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Apr 17 23:56:23.036277 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 17 23:56:23.036284 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Apr 17 23:56:23.036290 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Apr 17 23:56:23.036296 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 17 23:56:23.036301 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 17 23:56:23.036307 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 17 23:56:23.036313 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 17 23:56:23.036319 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Apr 17 23:56:23.036328 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Apr 17 23:56:23.036335 kernel: NX (Execute Disable) protection: active Apr 17 23:56:23.036341 kernel: APIC: Static calls initialized Apr 17 23:56:23.036348 kernel: SMBIOS 2.8 present. 
Apr 17 23:56:23.036354 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified Apr 17 23:56:23.036361 kernel: Hypervisor detected: KVM Apr 17 23:56:23.036370 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 17 23:56:23.036377 kernel: kvm-clock: using sched offset of 6269330340 cycles Apr 17 23:56:23.036384 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 17 23:56:23.036390 kernel: tsc: Detected 2000.000 MHz processor Apr 17 23:56:23.036397 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 17 23:56:23.036404 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 17 23:56:23.036410 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Apr 17 23:56:23.036417 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 17 23:56:23.036424 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 17 23:56:23.036433 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Apr 17 23:56:23.036439 kernel: Using GB pages for direct mapping Apr 17 23:56:23.036445 kernel: ACPI: Early table checksum verification disabled Apr 17 23:56:23.036452 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS ) Apr 17 23:56:23.036458 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 23:56:23.036465 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 23:56:23.036471 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 23:56:23.036478 kernel: ACPI: FACS 0x000000007FFE0000 000040 Apr 17 23:56:23.036484 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 23:56:23.036493 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 23:56:23.036500 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 23:56:23.036507 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 23:56:23.036517 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Apr 17 23:56:23.036524 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Apr 17 23:56:23.036530 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Apr 17 23:56:23.036540 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Apr 17 23:56:23.036547 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Apr 17 23:56:23.036554 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Apr 17 23:56:23.036561 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Apr 17 23:56:23.036568 kernel: No NUMA configuration found Apr 17 23:56:23.036574 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Apr 17 23:56:23.036581 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff] Apr 17 23:56:23.036588 kernel: Zone ranges: Apr 17 23:56:23.036597 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 17 23:56:23.036604 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 17 23:56:23.036626 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Apr 17 23:56:23.036634 kernel: Movable zone start for each node Apr 17 23:56:23.036641 kernel: Early memory node ranges Apr 17 23:56:23.036648 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 17 23:56:23.036655 kernel: node 0: [mem 
0x0000000000100000-0x000000007ffdcfff] Apr 17 23:56:23.036662 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Apr 17 23:56:23.036668 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Apr 17 23:56:23.036678 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 17 23:56:23.036685 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 17 23:56:23.036692 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Apr 17 23:56:23.036699 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 17 23:56:23.036706 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 17 23:56:23.036712 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 17 23:56:23.036719 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 17 23:56:23.036726 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 17 23:56:23.036733 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 17 23:56:23.036742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 17 23:56:23.036749 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 17 23:56:23.036756 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 17 23:56:23.036763 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 17 23:56:23.036769 kernel: TSC deadline timer available Apr 17 23:56:23.036776 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 17 23:56:23.036783 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 17 23:56:23.036790 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 17 23:56:23.036797 kernel: kvm-guest: setup PV sched yield Apr 17 23:56:23.036804 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 17 23:56:23.036813 kernel: Booting paravirtualized kernel on KVM Apr 17 23:56:23.036820 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 17 23:56:23.036827 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 17 23:56:23.036834 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Apr 17 23:56:23.036840 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Apr 17 23:56:23.036847 kernel: pcpu-alloc: [0] 0 1 Apr 17 23:56:23.036854 kernel: kvm-guest: PV spinlocks enabled Apr 17 23:56:23.036861 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 17 23:56:23.036869 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:56:23.036878 kernel: random: crng init done Apr 17 23:56:23.036885 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 17 23:56:23.036892 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 17 23:56:23.036898 kernel: Fallback order for Node 0: 0 Apr 17 23:56:23.036905 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Apr 17 23:56:23.036912 kernel: Policy zone: Normal Apr 17 23:56:23.036919 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 17 23:56:23.036926 kernel: software IO TLB: area num 2. 
Apr 17 23:56:23.036935 kernel: Memory: 3966220K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 227292K reserved, 0K cma-reserved) Apr 17 23:56:23.036942 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 17 23:56:23.036949 kernel: ftrace: allocating 37996 entries in 149 pages Apr 17 23:56:23.036956 kernel: ftrace: allocated 149 pages with 4 groups Apr 17 23:56:23.036963 kernel: Dynamic Preempt: voluntary Apr 17 23:56:23.036970 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 17 23:56:23.036980 kernel: rcu: RCU event tracing is enabled. Apr 17 23:56:23.036988 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 17 23:56:23.036995 kernel: Trampoline variant of Tasks RCU enabled. Apr 17 23:56:23.037004 kernel: Rude variant of Tasks RCU enabled. Apr 17 23:56:23.037011 kernel: Tracing variant of Tasks RCU enabled. Apr 17 23:56:23.037018 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 17 23:56:23.037025 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 17 23:56:23.037032 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 17 23:56:23.037039 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 17 23:56:23.037045 kernel: Console: colour VGA+ 80x25 Apr 17 23:56:23.037052 kernel: printk: console [tty0] enabled Apr 17 23:56:23.037059 kernel: printk: console [ttyS0] enabled Apr 17 23:56:23.037068 kernel: ACPI: Core revision 20230628 Apr 17 23:56:23.037075 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 17 23:56:23.037082 kernel: APIC: Switch to symmetric I/O mode setup Apr 17 23:56:23.037089 kernel: x2apic enabled Apr 17 23:56:23.037104 kernel: APIC: Switched APIC routing to: physical x2apic Apr 17 23:56:23.037114 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 17 23:56:23.037122 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 17 23:56:23.037129 kernel: kvm-guest: setup PV IPIs Apr 17 23:56:23.037136 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 17 23:56:23.037143 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Apr 17 23:56:23.037150 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000) Apr 17 23:56:23.037158 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 17 23:56:23.037167 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Apr 17 23:56:23.037175 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Apr 17 23:56:23.037182 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 17 23:56:23.037189 kernel: Spectre V2 : Mitigation: Retpolines Apr 17 23:56:23.037199 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 17 23:56:23.037206 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Apr 17 23:56:23.037213 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 17 23:56:23.037220 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 17 23:56:23.037228 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Apr 17 23:56:23.037235 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Apr 17 23:56:23.037243 kernel: active return thunk: srso_alias_return_thunk Apr 17 23:56:23.037250 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Apr 17 23:56:23.037257 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Apr 17 23:56:23.037267 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Apr 17 23:56:23.037274 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 17 23:56:23.037281 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 17 23:56:23.037288 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 17 23:56:23.037296 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Apr 17 23:56:23.037303 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 17 23:56:23.037310 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Apr 17 23:56:23.037317 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. Apr 17 23:56:23.037327 kernel: Freeing SMP alternatives memory: 32K Apr 17 23:56:23.037334 kernel: pid_max: default: 32768 minimum: 301 Apr 17 23:56:23.037341 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 17 23:56:23.037349 kernel: landlock: Up and running. Apr 17 23:56:23.037357 kernel: SELinux: Initializing. Apr 17 23:56:23.037364 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 17 23:56:23.037371 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 17 23:56:23.037378 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Apr 17 23:56:23.037385 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 23:56:23.037395 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 23:56:23.037402 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 23:56:23.037410 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Apr 17 23:56:23.037417 kernel: ... version: 0 Apr 17 23:56:23.037424 kernel: ... bit width: 48 Apr 17 23:56:23.037431 kernel: ... generic registers: 6 Apr 17 23:56:23.037438 kernel: ... value mask: 0000ffffffffffff Apr 17 23:56:23.037445 kernel: ... max period: 00007fffffffffff Apr 17 23:56:23.037451 kernel: ... fixed-purpose events: 0 Apr 17 23:56:23.037461 kernel: ... event mask: 000000000000003f Apr 17 23:56:23.037467 kernel: signal: max sigframe size: 3376 Apr 17 23:56:23.037474 kernel: rcu: Hierarchical SRCU implementation. Apr 17 23:56:23.037481 kernel: rcu: Max phase no-delay instances is 400. Apr 17 23:56:23.037488 kernel: smp: Bringing up secondary CPUs ... Apr 17 23:56:23.037494 kernel: smpboot: x86: Booting SMP configuration: Apr 17 23:56:23.037501 kernel: .... 
node #0, CPUs: #1 Apr 17 23:56:23.037508 kernel: smp: Brought up 1 node, 2 CPUs Apr 17 23:56:23.037514 kernel: smpboot: Max logical packages: 1 Apr 17 23:56:23.037521 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Apr 17 23:56:23.037530 kernel: devtmpfs: initialized Apr 17 23:56:23.037537 kernel: x86/mm: Memory block size: 128MB Apr 17 23:56:23.037544 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 17 23:56:23.037551 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 17 23:56:23.037558 kernel: pinctrl core: initialized pinctrl subsystem Apr 17 23:56:23.037564 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 17 23:56:23.037571 kernel: audit: initializing netlink subsys (disabled) Apr 17 23:56:23.037578 kernel: audit: type=2000 audit(1776470181.815:1): state=initialized audit_enabled=0 res=1 Apr 17 23:56:23.037587 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 17 23:56:23.037602 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 17 23:56:23.037653 kernel: cpuidle: using governor menu Apr 17 23:56:23.037667 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 17 23:56:23.037674 kernel: dca service started, version 1.12.1 Apr 17 23:56:23.037688 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 17 23:56:23.037695 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 17 23:56:23.037702 kernel: PCI: Using configuration type 1 for base access Apr 17 23:56:23.037709 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 17 23:56:23.037720 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 17 23:56:23.037727 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 17 23:56:23.037733 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 17 23:56:23.037740 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 17 23:56:23.037747 kernel: ACPI: Added _OSI(Module Device) Apr 17 23:56:23.037754 kernel: ACPI: Added _OSI(Processor Device) Apr 17 23:56:23.037760 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 17 23:56:23.037767 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 17 23:56:23.037774 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 17 23:56:23.037783 kernel: ACPI: Interpreter enabled Apr 17 23:56:23.037790 kernel: ACPI: PM: (supports S0 S3 S5) Apr 17 23:56:23.037797 kernel: ACPI: Using IOAPIC for interrupt routing Apr 17 23:56:23.037804 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 17 23:56:23.037810 kernel: PCI: Using E820 reservations for host bridge windows Apr 17 23:56:23.037817 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 17 23:56:23.037824 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 17 23:56:23.042390 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 17 23:56:23.042555 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 17 23:56:23.042711 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 17 23:56:23.042722 kernel: PCI host bridge to bus 0000:00 Apr 17 23:56:23.042852 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 17 23:56:23.042970 kernel: pci_bus 0000:00: root bus resource [io 
0x0d00-0xffff window] Apr 17 23:56:23.043084 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 17 23:56:23.043229 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Apr 17 23:56:23.043354 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 17 23:56:23.043469 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Apr 17 23:56:23.043584 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 17 23:56:23.043804 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 17 23:56:23.043944 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 17 23:56:23.044072 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Apr 17 23:56:23.044197 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Apr 17 23:56:23.044329 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Apr 17 23:56:23.044453 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 17 23:56:23.044588 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 Apr 17 23:56:23.044816 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f] Apr 17 23:56:23.044947 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Apr 17 23:56:23.045073 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Apr 17 23:56:23.045206 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Apr 17 23:56:23.045340 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Apr 17 23:56:23.045465 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Apr 17 23:56:23.046668 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Apr 17 23:56:23.046812 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Apr 17 23:56:23.046948 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 17 23:56:23.047074 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 17 23:56:23.047214 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 17 23:56:23.047339 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df] Apr 17 23:56:23.047464 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff] Apr 17 23:56:23.047598 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 17 23:56:23.050377 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Apr 17 23:56:23.050393 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 17 23:56:23.050402 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 17 23:56:23.050415 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 17 23:56:23.050423 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 17 23:56:23.050431 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 17 23:56:23.050439 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 17 23:56:23.050446 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 17 23:56:23.050456 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 17 23:56:23.050467 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 17 23:56:23.050475 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 17 23:56:23.050483 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 17 23:56:23.050493 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 17 23:56:23.050501 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 17 
23:56:23.050509 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 17 23:56:23.050516 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 17 23:56:23.050524 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 17 23:56:23.050532 kernel: iommu: Default domain type: Translated Apr 17 23:56:23.050539 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 17 23:56:23.050547 kernel: PCI: Using ACPI for IRQ routing Apr 17 23:56:23.050555 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 17 23:56:23.050565 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Apr 17 23:56:23.050573 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Apr 17 23:56:23.050738 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 17 23:56:23.050865 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 17 23:56:23.050989 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 17 23:56:23.050999 kernel: vgaarb: loaded Apr 17 23:56:23.051008 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 17 23:56:23.051016 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 17 23:56:23.051023 kernel: clocksource: Switched to clocksource kvm-clock Apr 17 23:56:23.051036 kernel: VFS: Disk quotas dquot_6.6.0 Apr 17 23:56:23.051044 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 17 23:56:23.051051 kernel: pnp: PnP ACPI init Apr 17 23:56:23.051188 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 17 23:56:23.051200 kernel: pnp: PnP ACPI: found 5 devices Apr 17 23:56:23.051208 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 17 23:56:23.051216 kernel: NET: Registered PF_INET protocol family Apr 17 23:56:23.051223 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 17 23:56:23.051235 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 17 23:56:23.051243 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 17 23:56:23.051250 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 17 23:56:23.051258 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 17 23:56:23.051266 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 17 23:56:23.051274 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 17 23:56:23.051282 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 17 23:56:23.051290 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 17 23:56:23.051298 kernel: NET: Registered PF_XDP protocol family Apr 17 23:56:23.051422 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 17 23:56:23.051540 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 17 23:56:23.051708 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 17 23:56:23.051825 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Apr 17 23:56:23.051941 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 17 23:56:23.052056 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Apr 17 23:56:23.052065 kernel: PCI: CLS 0 bytes, default 64 Apr 17 23:56:23.052073 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 17 23:56:23.052085 kernel: software IO TLB: mapped [mem 
0x000000007bfdd000-0x000000007ffdd000] (64MB) Apr 17 23:56:23.052093 kernel: Initialise system trusted keyrings Apr 17 23:56:23.052100 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 17 23:56:23.052108 kernel: Key type asymmetric registered Apr 17 23:56:23.052115 kernel: Asymmetric key parser 'x509' registered Apr 17 23:56:23.052122 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 17 23:56:23.052130 kernel: io scheduler mq-deadline registered Apr 17 23:56:23.052137 kernel: io scheduler kyber registered Apr 17 23:56:23.052144 kernel: io scheduler bfq registered Apr 17 23:56:23.052154 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 17 23:56:23.052163 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 17 23:56:23.052170 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 17 23:56:23.052178 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 17 23:56:23.052185 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 17 23:56:23.052193 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 17 23:56:23.052200 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 17 23:56:23.052207 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 17 23:56:23.052215 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 17 23:56:23.052347 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 17 23:56:23.052467 kernel: rtc_cmos 00:03: registered as rtc0 Apr 17 23:56:23.052585 kernel: rtc_cmos 00:03: setting system clock to 2026-04-17T23:56:22 UTC (1776470182) Apr 17 23:56:23.052725 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 17 23:56:23.052736 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 17 23:56:23.052744 kernel: NET: Registered PF_INET6 protocol family Apr 17 23:56:23.052751 kernel: Segment Routing with IPv6 Apr 17 23:56:23.052758 kernel: In-situ OAM (IOAM) with IPv6 Apr 17 23:56:23.052770 kernel: NET: Registered PF_PACKET protocol family Apr 17 23:56:23.052778 kernel: Key type dns_resolver registered Apr 17 23:56:23.052785 kernel: IPI shorthand broadcast: enabled Apr 17 23:56:23.052793 kernel: sched_clock: Marking stable (909005960, 357929660)->(1404921780, -137986160) Apr 17 23:56:23.052800 kernel: registered taskstats version 1 Apr 17 23:56:23.052807 kernel: Loading compiled-in X.509 certificates Apr 17 23:56:23.052815 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f' Apr 17 23:56:23.052822 kernel: Key type .fscrypt registered Apr 17 23:56:23.052829 kernel: Key type fscrypt-provisioning registered Apr 17 23:56:23.052839 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 17 23:56:23.052847 kernel: ima: Allocated hash algorithm: sha1 Apr 17 23:56:23.052854 kernel: ima: No architecture policies found Apr 17 23:56:23.052861 kernel: clk: Disabling unused clocks Apr 17 23:56:23.052869 kernel: Freeing unused kernel image (initmem) memory: 42892K Apr 17 23:56:23.052876 kernel: Write protecting the kernel read-only data: 36864k Apr 17 23:56:23.052884 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 17 23:56:23.052891 kernel: Run /init as init process Apr 17 23:56:23.052898 kernel: with arguments: Apr 17 23:56:23.052908 kernel: /init Apr 17 23:56:23.052915 kernel: with environment: Apr 17 23:56:23.052923 kernel: HOME=/ Apr 17 23:56:23.052930 kernel: TERM=linux Apr 17 23:56:23.052940 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 17 23:56:23.052949 systemd[1]: Detected virtualization kvm. Apr 17 23:56:23.052957 systemd[1]: Detected architecture x86-64. Apr 17 23:56:23.052967 systemd[1]: Running in initrd. Apr 17 23:56:23.052975 systemd[1]: No hostname configured, using default hostname. Apr 17 23:56:23.052982 systemd[1]: Hostname set to . Apr 17 23:56:23.052990 systemd[1]: Initializing machine ID from random generator. Apr 17 23:56:23.052998 systemd[1]: Queued start job for default target initrd.target. Apr 17 23:56:23.053006 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:56:23.053026 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:56:23.053037 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 17 23:56:23.053046 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 23:56:23.053054 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 17 23:56:23.053062 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 17 23:56:23.053072 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 17 23:56:23.053080 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 17 23:56:23.053090 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:56:23.053099 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:56:23.053107 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:56:23.053114 systemd[1]: Reached target slices.target - Slice Units. Apr 17 23:56:23.053122 systemd[1]: Reached target swap.target - Swaps. Apr 17 23:56:23.053130 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:56:23.053138 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:56:23.053146 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:56:23.053155 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 17 23:56:23.053195 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Apr 17 23:56:23.053204 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:56:23.053212 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:56:23.053220 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:56:23.053228 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:56:23.053236 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 23:56:23.053244 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:56:23.053252 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 17 23:56:23.053263 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 23:56:23.053271 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:56:23.053279 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:56:23.053287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:56:23.053319 systemd-journald[178]: Collecting audit messages is disabled. Apr 17 23:56:23.053342 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 23:56:23.053353 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:56:23.053361 systemd[1]: Finished systemd-fsck-usr.service. Apr 17 23:56:23.053372 systemd-journald[178]: Journal started Apr 17 23:56:23.053389 systemd-journald[178]: Runtime Journal (/run/log/journal/b193c1677bcc48739bfb50a47c0c8ed0) is 8.0M, max 78.3M, 70.3M free. Apr 17 23:56:23.039009 systemd-modules-load[179]: Inserted module 'overlay' Apr 17 23:56:23.152882 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:56:23.152914 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 17 23:56:23.152930 kernel: Bridge firewalling registered Apr 17 23:56:23.075867 systemd-modules-load[179]: Inserted module 'br_netfilter' Apr 17 23:56:23.155093 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:56:23.156188 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:56:23.163758 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:56:23.165730 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:56:23.170752 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 23:56:23.183221 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:56:23.187095 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:56:23.211451 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:56:23.213436 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:56:23.214649 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:56:23.221806 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 17 23:56:23.227972 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:56:23.231970 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Apr 17 23:56:23.240493 dracut-cmdline[210]: dracut-dracut-053 Apr 17 23:56:23.245711 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:56:23.255263 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:56:23.285406 systemd-resolved[211]: Positive Trust Anchors: Apr 17 23:56:23.285429 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:56:23.285475 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:56:23.290403 systemd-resolved[211]: Defaulting to hostname 'linux'. Apr 17 23:56:23.291666 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:56:23.296351 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:56:23.340664 kernel: SCSI subsystem initialized Apr 17 23:56:23.350649 kernel: Loading iSCSI transport class v2.0-870. Apr 17 23:56:23.361674 kernel: iscsi: registered transport (tcp) Apr 17 23:56:23.384576 kernel: iscsi: registered transport (qla4xxx) Apr 17 23:56:23.384674 kernel: QLogic iSCSI HBA Driver Apr 17 23:56:23.436359 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 23:56:23.443860 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 23:56:23.474975 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 17 23:56:23.475047 kernel: device-mapper: uevent: version 1.0.3 Apr 17 23:56:23.480651 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 17 23:56:23.525653 kernel: raid6: avx2x4 gen() 30196 MB/s Apr 17 23:56:23.543645 kernel: raid6: avx2x2 gen() 25158 MB/s Apr 17 23:56:23.561847 kernel: raid6: avx2x1 gen() 21495 MB/s Apr 17 23:56:23.561913 kernel: raid6: using algorithm avx2x4 gen() 30196 MB/s Apr 17 23:56:23.581965 kernel: raid6: .... xor() 4549 MB/s, rmw enabled Apr 17 23:56:23.582029 kernel: raid6: using avx2x2 recovery algorithm Apr 17 23:56:23.603647 kernel: xor: automatically using best checksumming function avx Apr 17 23:56:23.734651 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 23:56:23.746024 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:56:23.751782 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:56:23.766182 systemd-udevd[397]: Using default interface naming scheme 'v255'. Apr 17 23:56:23.770776 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Apr 17 23:56:23.777762 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 17 23:56:23.791810 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Apr 17 23:56:23.820873 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:56:23.826741 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:56:23.895081 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:56:23.902796 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 17 23:56:23.918546 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:56:23.924195 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:56:23.927076 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:56:23.928719 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:56:23.937249 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:56:23.951298 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:56:23.989670 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 23:56:24.188639 kernel: libata version 3.00 loaded. Apr 17 23:56:24.193035 kernel: AVX2 version of gcm_enc/dec engaged. Apr 17 23:56:24.193081 kernel: AES CTR mode by8 optimization enabled Apr 17 23:56:24.195656 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:56:24.207271 kernel: ahci 0000:00:1f.2: version 3.0 Apr 17 23:56:24.207484 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 17 23:56:24.207499 kernel: scsi host0: Virtio SCSI HBA Apr 17 23:56:24.195773 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:56:24.255871 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 17 23:56:24.256082 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 17 23:56:24.256238 kernel: scsi host1: ahci Apr 17 23:56:24.256411 kernel: scsi host2: ahci Apr 17 23:56:24.256570 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 17 23:56:24.256598 kernel: scsi host3: ahci Apr 17 23:56:24.196675 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:56:24.261285 kernel: scsi host4: ahci Apr 17 23:56:24.261662 kernel: scsi host5: ahci Apr 17 23:56:24.261820 kernel: scsi host6: ahci Apr 17 23:56:24.197401 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:56:24.282551 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 Apr 17 23:56:24.282575 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 Apr 17 23:56:24.282594 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 Apr 17 23:56:24.282604 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 Apr 17 23:56:24.282641 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 Apr 17 23:56:24.282652 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 Apr 17 23:56:24.197560 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:56:24.208982 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:56:24.289865 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 17 23:56:24.388536 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:56:24.393751 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:56:24.408085 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:56:24.581585 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 17 23:56:24.581675 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 17 23:56:24.581689 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 17 23:56:24.584636 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 17 23:56:24.584665 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 17 23:56:24.586636 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 17 23:56:24.614810 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 17 23:56:24.615059 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Apr 17 23:56:24.640057 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 17 23:56:24.642224 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 17 23:56:24.642428 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 17 23:56:24.651355 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 23:56:24.651382 kernel: GPT:9289727 != 167739391 Apr 17 23:56:24.651396 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 23:56:24.654034 kernel: GPT:9289727 != 167739391 Apr 17 23:56:24.657052 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 23:56:24.657072 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:56:24.660396 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 17 23:56:24.698635 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (452) Apr 17 23:56:24.698685 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (468) Apr 17 23:56:24.709943 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 17 23:56:24.717335 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 17 23:56:24.723673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 17 23:56:24.728036 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 17 23:56:24.730479 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 17 23:56:24.736741 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:56:24.747631 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:56:24.747999 disk-uuid[567]: Primary Header is updated. Apr 17 23:56:24.747999 disk-uuid[567]: Secondary Entries is updated. Apr 17 23:56:24.747999 disk-uuid[567]: Secondary Header is updated. Apr 17 23:56:25.764683 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:56:25.768632 disk-uuid[568]: The operation has completed successfully. Apr 17 23:56:25.813682 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:56:25.813815 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 23:56:25.823771 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Apr 17 23:56:25.829702 sh[585]: Success Apr 17 23:56:25.846646 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 17 23:56:25.898576 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:56:25.915659 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 23:56:25.916762 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 17 23:56:25.935940 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 Apr 17 23:56:25.935992 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:56:25.938969 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:56:25.944342 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:56:25.944370 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:56:25.955638 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 17 23:56:25.957395 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:56:25.958915 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:56:25.965841 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:56:25.969760 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 23:56:25.984058 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:56:25.984128 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:56:25.986793 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:56:25.995920 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 23:56:25.995959 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:56:26.008095 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 17 23:56:26.012269 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:56:26.019285 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:56:26.026881 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 17 23:56:26.114185 ignition[687]: Ignition 2.19.0 Apr 17 23:56:26.115315 ignition[687]: Stage: fetch-offline Apr 17 23:56:26.115379 ignition[687]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:56:26.115393 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:56:26.119018 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:56:26.115519 ignition[687]: parsed url from cmdline: "" Apr 17 23:56:26.115524 ignition[687]: no config URL provided Apr 17 23:56:26.115530 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:56:26.123424 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:56:26.115540 ignition[687]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:56:26.115546 ignition[687]: failed to fetch config: resource requires networking Apr 17 23:56:26.115871 ignition[687]: Ignition finished successfully Apr 17 23:56:26.131919 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 17 23:56:26.156446 systemd-networkd[772]: lo: Link UP Apr 17 23:56:26.156460 systemd-networkd[772]: lo: Gained carrier Apr 17 23:56:26.158459 systemd-networkd[772]: Enumeration completed Apr 17 23:56:26.159042 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:56:26.159049 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:56:26.160831 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:56:26.162060 systemd-networkd[772]: eth0: Link UP Apr 17 23:56:26.162066 systemd-networkd[772]: eth0: Gained carrier Apr 17 23:56:26.162085 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:56:26.162928 systemd[1]: Reached target network.target - Network. Apr 17 23:56:26.172894 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 17 23:56:26.188179 ignition[774]: Ignition 2.19.0 Apr 17 23:56:26.188198 ignition[774]: Stage: fetch Apr 17 23:56:26.188370 ignition[774]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:56:26.188384 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:56:26.188490 ignition[774]: parsed url from cmdline: "" Apr 17 23:56:26.188495 ignition[774]: no config URL provided Apr 17 23:56:26.188500 ignition[774]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:56:26.188511 ignition[774]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:56:26.188533 ignition[774]: PUT http://169.254.169.254/v1/token: attempt #1 Apr 17 23:56:26.188777 ignition[774]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 23:56:26.389073 ignition[774]: PUT http://169.254.169.254/v1/token: attempt #2 Apr 17 23:56:26.389413 ignition[774]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 23:56:26.789911 ignition[774]: PUT http://169.254.169.254/v1/token: attempt #3 Apr 17 23:56:26.790086 ignition[774]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 23:56:27.012691 systemd-networkd[772]: eth0: DHCPv4 address 172.232.15.112/24, gateway 172.232.15.1 acquired from 23.210.200.66 Apr 17 23:56:27.590972 ignition[774]: PUT http://169.254.169.254/v1/token: attempt #4 Apr 17 23:56:27.672738 ignition[774]: PUT result: OK Apr 17 23:56:27.672794 ignition[774]: GET http://169.254.169.254/v1/user-data: attempt #1 Apr 17 23:56:27.784047 ignition[774]: GET result: OK Apr 17 23:56:27.784143 ignition[774]: parsing config with SHA512: 678ec92b262cc9b3422bbab1298574a693a17c5eaf879a8099f9180fab86ae51d4faf864de079405652479e4ce1c31ab691cdec5bb5dd14aed05590f3ea9fc72 Apr 17 23:56:27.787698 unknown[774]: fetched base config from "system" Apr 17 23:56:27.788370 unknown[774]: fetched base config from "system" Apr 17 23:56:27.788378 unknown[774]: fetched user config from "akamai" Apr 17 23:56:27.788711 ignition[774]: fetch: fetch complete Apr 17 23:56:27.788717 ignition[774]: fetch: fetch passed Apr 17 23:56:27.788765 ignition[774]: Ignition finished successfully Apr 17 23:56:27.795164 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 17 23:56:27.807758 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 17 23:56:27.823116 ignition[782]: Ignition 2.19.0 Apr 17 23:56:27.823129 ignition[782]: Stage: kargs Apr 17 23:56:27.823358 ignition[782]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:56:27.823376 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:56:27.826047 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 17 23:56:27.824289 ignition[782]: kargs: kargs passed Apr 17 23:56:27.824333 ignition[782]: Ignition finished successfully Apr 17 23:56:27.831791 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 17 23:56:27.847829 ignition[789]: Ignition 2.19.0 Apr 17 23:56:27.847841 ignition[789]: Stage: disks Apr 17 23:56:27.847992 ignition[789]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:56:27.848004 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:56:27.850668 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 17 23:56:27.848702 ignition[789]: disks: disks passed Apr 17 23:56:27.874506 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 17 23:56:27.848742 ignition[789]: Ignition finished successfully Apr 17 23:56:27.875650 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 23:56:27.877481 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:56:27.878834 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:56:27.880603 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:56:27.888975 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 17 23:56:27.905343 systemd-fsck[797]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 17 23:56:27.909137 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 17 23:56:27.914749 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 17 23:56:28.017638 kernel: EXT4-fs (sda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none. Apr 17 23:56:28.019119 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 17 23:56:28.020689 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 17 23:56:28.027703 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:56:28.031452 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 17 23:56:28.032497 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 17 23:56:28.032553 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 17 23:56:28.032586 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Apr 17 23:56:28.049353 systemd-networkd[772]: eth0: Gained IPv6LL Apr 17 23:56:28.071388 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (805) Apr 17 23:56:28.071416 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:56:28.071428 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:56:28.071438 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:56:28.071449 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 23:56:28.071459 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:56:28.056296 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 17 23:56:28.076759 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 17 23:56:28.079954 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 17 23:56:28.128069 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory Apr 17 23:56:28.134835 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory Apr 17 23:56:28.139895 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory Apr 17 23:56:28.145526 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory Apr 17 23:56:28.237592 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 17 23:56:28.249694 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 17 23:56:28.255766 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 17 23:56:28.262369 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:56:28.259171 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 17 23:56:28.291352 ignition[919]: INFO : Ignition 2.19.0 Apr 17 23:56:28.291352 ignition[919]: INFO : Stage: mount Apr 17 23:56:28.291352 ignition[919]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:56:28.291352 ignition[919]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:56:28.297760 ignition[919]: INFO : mount: mount passed Apr 17 23:56:28.297760 ignition[919]: INFO : Ignition finished successfully Apr 17 23:56:28.296971 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 17 23:56:28.299674 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 17 23:56:28.308720 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 17 23:56:29.023763 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:56:29.038636 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (930) Apr 17 23:56:29.038667 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:56:29.042253 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:56:29.046831 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:56:29.051686 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 23:56:29.051716 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:56:29.056332 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 17 23:56:29.076816 ignition[946]: INFO : Ignition 2.19.0 Apr 17 23:56:29.076816 ignition[946]: INFO : Stage: files Apr 17 23:56:29.078735 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:56:29.078735 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:56:29.078735 ignition[946]: DEBUG : files: compiled without relabeling support, skipping Apr 17 23:56:29.082203 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 17 23:56:29.082203 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 17 23:56:29.084448 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 17 23:56:29.084448 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 17 23:56:29.084448 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 17 23:56:29.084392 unknown[946]: wrote ssh authorized keys file for user: core Apr 17 23:56:29.088491 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 17 23:56:29.088491 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 17 23:56:29.088491 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:56:29.088491 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 17 23:56:29.404424 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 17 23:56:29.465214 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:56:29.465214 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:56:29.468371 ignition[946]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 17 23:56:29.895478 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 17 23:56:30.299492 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:56:30.299492 ignition[946]: INFO : files: op(c): [started] processing unit "containerd.service" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(c): [finished] processing unit "containerd.service" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Apr 17 23:56:30.324185 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:56:30.324185 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:56:30.324185 ignition[946]: INFO : files: files passed Apr 17 23:56:30.324185 ignition[946]: INFO : Ignition 
finished successfully Apr 17 23:56:30.305833 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 17 23:56:30.370021 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 17 23:56:30.375753 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 17 23:56:30.390200 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 17 23:56:30.391244 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 17 23:56:30.395693 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:56:30.397425 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:56:30.398890 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:56:30.401102 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:56:30.403584 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 17 23:56:30.410773 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 17 23:56:30.437019 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 17 23:56:30.437147 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 17 23:56:30.438586 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 17 23:56:30.440776 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 17 23:56:30.442510 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 17 23:56:30.448736 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 17 23:56:30.464198 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:56:30.471763 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 17 23:56:30.481068 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:56:30.482199 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:56:30.484120 systemd[1]: Stopped target timers.target - Timer Units. Apr 17 23:56:30.485872 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 17 23:56:30.486061 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:56:30.488135 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 17 23:56:30.489470 systemd[1]: Stopped target basic.target - Basic System. Apr 17 23:56:30.491113 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 17 23:56:30.492690 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:56:30.494242 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 17 23:56:30.495927 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 17 23:56:30.497606 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:56:30.499395 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 17 23:56:30.501082 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 17 23:56:30.502882 systemd[1]: Stopped target swap.target - Swaps. 
Apr 17 23:56:30.504529 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 17 23:56:30.504673 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:56:30.506590 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:56:30.507708 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:56:30.509257 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 17 23:56:30.509677 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:56:30.510988 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 17 23:56:30.511091 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 17 23:56:30.513759 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 17 23:56:30.513961 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:56:30.515776 systemd[1]: ignition-files.service: Deactivated successfully. Apr 17 23:56:30.515879 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 17 23:56:30.524844 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 17 23:56:30.525964 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 17 23:56:30.526146 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:56:30.528906 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 17 23:56:30.532005 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 17 23:56:30.532734 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:56:30.534754 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 17 23:56:30.534924 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:56:30.551409 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 17 23:56:30.551533 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 17 23:56:30.559953 ignition[999]: INFO : Ignition 2.19.0 Apr 17 23:56:30.559953 ignition[999]: INFO : Stage: umount Apr 17 23:56:30.563870 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:56:30.563870 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:56:30.563870 ignition[999]: INFO : umount: umount passed Apr 17 23:56:30.563870 ignition[999]: INFO : Ignition finished successfully Apr 17 23:56:30.562852 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 17 23:56:30.562982 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 17 23:56:30.566812 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 17 23:56:30.566865 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 17 23:56:30.567724 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 17 23:56:30.567804 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 17 23:56:30.595825 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 17 23:56:30.595907 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 17 23:56:30.597198 systemd[1]: Stopped target network.target - Network. Apr 17 23:56:30.598549 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 17 23:56:30.598606 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 17 23:56:30.601170 systemd[1]: Stopped target paths.target - Path Units. Apr 17 23:56:30.603937 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 17 23:56:30.607663 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:56:30.614978 systemd[1]: Stopped target slices.target - Slice Units. Apr 17 23:56:30.616518 systemd[1]: Stopped target sockets.target - Socket Units. Apr 17 23:56:30.619268 systemd[1]: iscsid.socket: Deactivated successfully. Apr 17 23:56:30.619343 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:56:30.620138 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 17 23:56:30.620219 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:56:30.620978 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 17 23:56:30.621044 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 17 23:56:30.622800 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 17 23:56:30.622853 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 17 23:56:30.624442 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 17 23:56:30.626789 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 17 23:56:30.629251 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 17 23:56:30.629661 systemd-networkd[772]: eth0: DHCPv6 lease lost Apr 17 23:56:30.631715 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 17 23:56:30.631857 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 17 23:56:30.633260 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 17 23:56:30.633301 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:56:30.638820 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 17 23:56:30.640317 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 17 23:56:30.640381 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:56:30.643764 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:56:30.654157 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 17 23:56:30.654274 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 17 23:56:30.656851 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 17 23:56:30.656980 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 17 23:56:30.664933 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 17 23:56:30.665099 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:56:30.669910 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 17 23:56:30.669969 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 17 23:56:30.670817 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 17 23:56:30.670860 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:56:30.672443 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 17 23:56:30.672498 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:56:30.674706 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 17 23:56:30.674756 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Apr 17 23:56:30.676285 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:56:30.676336 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:56:30.677833 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 17 23:56:30.677883 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 17 23:56:30.685776 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 17 23:56:30.687474 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 17 23:56:30.687538 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:56:30.689218 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 17 23:56:30.689273 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 17 23:56:30.690975 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 17 23:56:30.691026 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:56:30.692704 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 17 23:56:30.692757 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:56:30.694646 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:56:30.694697 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:56:30.696801 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 17 23:56:30.697135 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 17 23:56:30.698593 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 17 23:56:30.698725 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 17 23:56:30.701455 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 17 23:56:30.707779 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 17 23:56:30.718417 systemd[1]: Switching root. Apr 17 23:56:30.752685 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). 
Apr 17 23:56:30.752760 systemd-journald[178]: Journal stopped Apr 17 23:56:23.036224 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026 Apr 17 23:56:23.036247 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:56:23.036256 kernel: BIOS-provided physical RAM map: Apr 17 23:56:23.036262 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Apr 17 23:56:23.036268 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Apr 17 23:56:23.036277 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 17 23:56:23.036284 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Apr 17 23:56:23.036290 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Apr 17 23:56:23.036296 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 17 23:56:23.036301 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 17 23:56:23.036307 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 17 23:56:23.036313 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 17 23:56:23.036319 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Apr 17 23:56:23.036328 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Apr 17 23:56:23.036335 kernel: NX (Execute Disable) protection: active Apr 17 23:56:23.036341 kernel: APIC: Static calls initialized Apr 17 23:56:23.036348 kernel: SMBIOS 2.8 present. 
Apr 17 23:56:23.036354 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified Apr 17 23:56:23.036361 kernel: Hypervisor detected: KVM Apr 17 23:56:23.036370 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 17 23:56:23.036377 kernel: kvm-clock: using sched offset of 6269330340 cycles Apr 17 23:56:23.036384 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 17 23:56:23.036390 kernel: tsc: Detected 2000.000 MHz processor Apr 17 23:56:23.036397 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 17 23:56:23.036404 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 17 23:56:23.036410 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Apr 17 23:56:23.036417 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 17 23:56:23.036424 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 17 23:56:23.036433 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Apr 17 23:56:23.036439 kernel: Using GB pages for direct mapping Apr 17 23:56:23.036445 kernel: ACPI: Early table checksum verification disabled Apr 17 23:56:23.036452 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS ) Apr 17 23:56:23.036458 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 23:56:23.036465 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 23:56:23.036471 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 23:56:23.036478 kernel: ACPI: FACS 0x000000007FFE0000 000040 Apr 17 23:56:23.036484 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 23:56:23.036493 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 23:56:23.036500 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 23:56:23.036507 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 23:56:23.036517 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Apr 17 23:56:23.036524 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Apr 17 23:56:23.036530 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Apr 17 23:56:23.036540 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Apr 17 23:56:23.036547 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Apr 17 23:56:23.036554 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Apr 17 23:56:23.036561 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Apr 17 23:56:23.036568 kernel: No NUMA configuration found Apr 17 23:56:23.036574 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Apr 17 23:56:23.036581 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff] Apr 17 23:56:23.036588 kernel: Zone ranges: Apr 17 23:56:23.036597 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 17 23:56:23.036604 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 17 23:56:23.036626 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Apr 17 23:56:23.036634 kernel: Movable zone start for each node Apr 17 23:56:23.036641 kernel: Early memory node ranges Apr 17 23:56:23.036648 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 17 23:56:23.036655 kernel: node 0: [mem 
0x0000000000100000-0x000000007ffdcfff] Apr 17 23:56:23.036662 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Apr 17 23:56:23.036668 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Apr 17 23:56:23.036678 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 17 23:56:23.036685 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 17 23:56:23.036692 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Apr 17 23:56:23.036699 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 17 23:56:23.036706 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 17 23:56:23.036712 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 17 23:56:23.036719 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 17 23:56:23.036726 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 17 23:56:23.036733 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 17 23:56:23.036742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 17 23:56:23.036749 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 17 23:56:23.036756 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 17 23:56:23.036763 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 17 23:56:23.036769 kernel: TSC deadline timer available Apr 17 23:56:23.036776 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 17 23:56:23.036783 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 17 23:56:23.036790 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 17 23:56:23.036797 kernel: kvm-guest: setup PV sched yield Apr 17 23:56:23.036804 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 17 23:56:23.036813 kernel: Booting paravirtualized kernel on KVM Apr 17 23:56:23.036820 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 17 23:56:23.036827 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 17 23:56:23.036834 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Apr 17 23:56:23.036840 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Apr 17 23:56:23.036847 kernel: pcpu-alloc: [0] 0 1 Apr 17 23:56:23.036854 kernel: kvm-guest: PV spinlocks enabled Apr 17 23:56:23.036861 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 17 23:56:23.036869 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:56:23.036878 kernel: random: crng init done Apr 17 23:56:23.036885 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 17 23:56:23.036892 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 17 23:56:23.036898 kernel: Fallback order for Node 0: 0 Apr 17 23:56:23.036905 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Apr 17 23:56:23.036912 kernel: Policy zone: Normal Apr 17 23:56:23.036919 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 17 23:56:23.036926 kernel: software IO TLB: area num 2. 
Apr 17 23:56:23.036935 kernel: Memory: 3966220K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 227292K reserved, 0K cma-reserved) Apr 17 23:56:23.036942 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 17 23:56:23.036949 kernel: ftrace: allocating 37996 entries in 149 pages Apr 17 23:56:23.036956 kernel: ftrace: allocated 149 pages with 4 groups Apr 17 23:56:23.036963 kernel: Dynamic Preempt: voluntary Apr 17 23:56:23.036970 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 17 23:56:23.036980 kernel: rcu: RCU event tracing is enabled. Apr 17 23:56:23.036988 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 17 23:56:23.036995 kernel: Trampoline variant of Tasks RCU enabled. Apr 17 23:56:23.037004 kernel: Rude variant of Tasks RCU enabled. Apr 17 23:56:23.037011 kernel: Tracing variant of Tasks RCU enabled. Apr 17 23:56:23.037018 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 17 23:56:23.037025 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 17 23:56:23.037032 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 17 23:56:23.037039 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 17 23:56:23.037045 kernel: Console: colour VGA+ 80x25 Apr 17 23:56:23.037052 kernel: printk: console [tty0] enabled Apr 17 23:56:23.037059 kernel: printk: console [ttyS0] enabled Apr 17 23:56:23.037068 kernel: ACPI: Core revision 20230628 Apr 17 23:56:23.037075 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 17 23:56:23.037082 kernel: APIC: Switch to symmetric I/O mode setup Apr 17 23:56:23.037089 kernel: x2apic enabled Apr 17 23:56:23.037104 kernel: APIC: Switched APIC routing to: physical x2apic Apr 17 23:56:23.037114 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 17 23:56:23.037122 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 17 23:56:23.037129 kernel: kvm-guest: setup PV IPIs Apr 17 23:56:23.037136 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 17 23:56:23.037143 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Apr 17 23:56:23.037150 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000) Apr 17 23:56:23.037158 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 17 23:56:23.037167 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Apr 17 23:56:23.037175 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Apr 17 23:56:23.037182 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 17 23:56:23.037189 kernel: Spectre V2 : Mitigation: Retpolines Apr 17 23:56:23.037199 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 17 23:56:23.037206 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Apr 17 23:56:23.037213 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 17 23:56:23.037220 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 17 23:56:23.037228 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Apr 17 23:56:23.037235 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Apr 17 23:56:23.037243 kernel: active return thunk: srso_alias_return_thunk Apr 17 23:56:23.037250 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Apr 17 23:56:23.037257 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Apr 17 23:56:23.037267 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Apr 17 23:56:23.037274 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 17 23:56:23.037281 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 17 23:56:23.037288 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 17 23:56:23.037296 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Apr 17 23:56:23.037303 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 17 23:56:23.037310 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Apr 17 23:56:23.037317 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. Apr 17 23:56:23.037327 kernel: Freeing SMP alternatives memory: 32K Apr 17 23:56:23.037334 kernel: pid_max: default: 32768 minimum: 301 Apr 17 23:56:23.037341 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 17 23:56:23.037349 kernel: landlock: Up and running. Apr 17 23:56:23.037357 kernel: SELinux: Initializing. Apr 17 23:56:23.037364 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 17 23:56:23.037371 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 17 23:56:23.037378 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Apr 17 23:56:23.037385 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 23:56:23.037395 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 23:56:23.037402 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 23:56:23.037410 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Apr 17 23:56:23.037417 kernel: ... version: 0 Apr 17 23:56:23.037424 kernel: ... bit width: 48 Apr 17 23:56:23.037431 kernel: ... generic registers: 6 Apr 17 23:56:23.037438 kernel: ... value mask: 0000ffffffffffff Apr 17 23:56:23.037445 kernel: ... max period: 00007fffffffffff Apr 17 23:56:23.037451 kernel: ... fixed-purpose events: 0 Apr 17 23:56:23.037461 kernel: ... event mask: 000000000000003f Apr 17 23:56:23.037467 kernel: signal: max sigframe size: 3376 Apr 17 23:56:23.037474 kernel: rcu: Hierarchical SRCU implementation. Apr 17 23:56:23.037481 kernel: rcu: Max phase no-delay instances is 400. Apr 17 23:56:23.037488 kernel: smp: Bringing up secondary CPUs ... Apr 17 23:56:23.037494 kernel: smpboot: x86: Booting SMP configuration: Apr 17 23:56:23.037501 kernel: .... 
node #0, CPUs: #1 Apr 17 23:56:23.037508 kernel: smp: Brought up 1 node, 2 CPUs Apr 17 23:56:23.037514 kernel: smpboot: Max logical packages: 1 Apr 17 23:56:23.037521 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Apr 17 23:56:23.037530 kernel: devtmpfs: initialized Apr 17 23:56:23.037537 kernel: x86/mm: Memory block size: 128MB Apr 17 23:56:23.037544 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 17 23:56:23.037551 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 17 23:56:23.037558 kernel: pinctrl core: initialized pinctrl subsystem Apr 17 23:56:23.037564 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 17 23:56:23.037571 kernel: audit: initializing netlink subsys (disabled) Apr 17 23:56:23.037578 kernel: audit: type=2000 audit(1776470181.815:1): state=initialized audit_enabled=0 res=1 Apr 17 23:56:23.037587 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 17 23:56:23.037602 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 17 23:56:23.037653 kernel: cpuidle: using governor menu Apr 17 23:56:23.037667 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 17 23:56:23.037674 kernel: dca service started, version 1.12.1 Apr 17 23:56:23.037688 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 17 23:56:23.037695 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 17 23:56:23.037702 kernel: PCI: Using configuration type 1 for base access Apr 17 23:56:23.037709 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 17 23:56:23.037720 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 17 23:56:23.037727 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 17 23:56:23.037733 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 17 23:56:23.037740 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 17 23:56:23.037747 kernel: ACPI: Added _OSI(Module Device) Apr 17 23:56:23.037754 kernel: ACPI: Added _OSI(Processor Device) Apr 17 23:56:23.037760 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 17 23:56:23.037767 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 17 23:56:23.037774 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 17 23:56:23.037783 kernel: ACPI: Interpreter enabled Apr 17 23:56:23.037790 kernel: ACPI: PM: (supports S0 S3 S5) Apr 17 23:56:23.037797 kernel: ACPI: Using IOAPIC for interrupt routing Apr 17 23:56:23.037804 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 17 23:56:23.037810 kernel: PCI: Using E820 reservations for host bridge windows Apr 17 23:56:23.037817 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 17 23:56:23.037824 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 17 23:56:23.042390 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 17 23:56:23.042555 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 17 23:56:23.042711 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 17 23:56:23.042722 kernel: PCI host bridge to bus 0000:00 Apr 17 23:56:23.042852 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 17 23:56:23.042970 kernel: pci_bus 0000:00: root bus resource [io 
0x0d00-0xffff window] Apr 17 23:56:23.043084 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 17 23:56:23.043229 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Apr 17 23:56:23.043354 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 17 23:56:23.043469 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Apr 17 23:56:23.043584 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 17 23:56:23.043804 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 17 23:56:23.043944 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 17 23:56:23.044072 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Apr 17 23:56:23.044197 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Apr 17 23:56:23.044329 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Apr 17 23:56:23.044453 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 17 23:56:23.044588 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 Apr 17 23:56:23.044816 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f] Apr 17 23:56:23.044947 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Apr 17 23:56:23.045073 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Apr 17 23:56:23.045206 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Apr 17 23:56:23.045340 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Apr 17 23:56:23.045465 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Apr 17 23:56:23.046668 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Apr 17 23:56:23.046812 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Apr 17 23:56:23.046948 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 17 23:56:23.047074 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 17 23:56:23.047214 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 17 23:56:23.047339 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df] Apr 17 23:56:23.047464 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff] Apr 17 23:56:23.047598 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 17 23:56:23.050377 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Apr 17 23:56:23.050393 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 17 23:56:23.050402 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 17 23:56:23.050415 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 17 23:56:23.050423 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 17 23:56:23.050431 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 17 23:56:23.050439 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 17 23:56:23.050446 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 17 23:56:23.050456 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 17 23:56:23.050467 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 17 23:56:23.050475 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 17 23:56:23.050483 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 17 23:56:23.050493 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 17 23:56:23.050501 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 17 
23:56:23.050509 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 17 23:56:23.050516 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 17 23:56:23.050524 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 17 23:56:23.050532 kernel: iommu: Default domain type: Translated Apr 17 23:56:23.050539 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 17 23:56:23.050547 kernel: PCI: Using ACPI for IRQ routing Apr 17 23:56:23.050555 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 17 23:56:23.050565 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Apr 17 23:56:23.050573 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Apr 17 23:56:23.050738 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 17 23:56:23.050865 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 17 23:56:23.050989 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 17 23:56:23.050999 kernel: vgaarb: loaded Apr 17 23:56:23.051008 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 17 23:56:23.051016 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 17 23:56:23.051023 kernel: clocksource: Switched to clocksource kvm-clock Apr 17 23:56:23.051036 kernel: VFS: Disk quotas dquot_6.6.0 Apr 17 23:56:23.051044 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 17 23:56:23.051051 kernel: pnp: PnP ACPI init Apr 17 23:56:23.051188 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 17 23:56:23.051200 kernel: pnp: PnP ACPI: found 5 devices Apr 17 23:56:23.051208 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 17 23:56:23.051216 kernel: NET: Registered PF_INET protocol family Apr 17 23:56:23.051223 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 17 23:56:23.051235 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 17 23:56:23.051243 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 17 23:56:23.051250 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 17 23:56:23.051258 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 17 23:56:23.051266 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 17 23:56:23.051274 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 17 23:56:23.051282 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 17 23:56:23.051290 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 17 23:56:23.051298 kernel: NET: Registered PF_XDP protocol family Apr 17 23:56:23.051422 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 17 23:56:23.051540 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 17 23:56:23.051708 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 17 23:56:23.051825 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Apr 17 23:56:23.051941 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 17 23:56:23.052056 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Apr 17 23:56:23.052065 kernel: PCI: CLS 0 bytes, default 64 Apr 17 23:56:23.052073 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 17 23:56:23.052085 kernel: software IO TLB: mapped [mem 
0x000000007bfdd000-0x000000007ffdd000] (64MB) Apr 17 23:56:23.052093 kernel: Initialise system trusted keyrings Apr 17 23:56:23.052100 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 17 23:56:23.052108 kernel: Key type asymmetric registered Apr 17 23:56:23.052115 kernel: Asymmetric key parser 'x509' registered Apr 17 23:56:23.052122 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 17 23:56:23.052130 kernel: io scheduler mq-deadline registered Apr 17 23:56:23.052137 kernel: io scheduler kyber registered Apr 17 23:56:23.052144 kernel: io scheduler bfq registered Apr 17 23:56:23.052154 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 17 23:56:23.052163 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 17 23:56:23.052170 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 17 23:56:23.052178 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 17 23:56:23.052185 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 17 23:56:23.052193 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 17 23:56:23.052200 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 17 23:56:23.052207 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 17 23:56:23.052215 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 17 23:56:23.052347 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 17 23:56:23.052467 kernel: rtc_cmos 00:03: registered as rtc0 Apr 17 23:56:23.052585 kernel: rtc_cmos 00:03: setting system clock to 2026-04-17T23:56:22 UTC (1776470182) Apr 17 23:56:23.052725 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 17 23:56:23.052736 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 17 23:56:23.052744 kernel: NET: Registered PF_INET6 protocol family Apr 17 23:56:23.052751 kernel: Segment Routing with IPv6 Apr 17 23:56:23.052758 kernel: In-situ OAM (IOAM) with IPv6 Apr 17 23:56:23.052770 kernel: NET: Registered PF_PACKET protocol family Apr 17 23:56:23.052778 kernel: Key type dns_resolver registered Apr 17 23:56:23.052785 kernel: IPI shorthand broadcast: enabled Apr 17 23:56:23.052793 kernel: sched_clock: Marking stable (909005960, 357929660)->(1404921780, -137986160) Apr 17 23:56:23.052800 kernel: registered taskstats version 1 Apr 17 23:56:23.052807 kernel: Loading compiled-in X.509 certificates Apr 17 23:56:23.052815 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f' Apr 17 23:56:23.052822 kernel: Key type .fscrypt registered Apr 17 23:56:23.052829 kernel: Key type fscrypt-provisioning registered Apr 17 23:56:23.052839 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 17 23:56:23.052847 kernel: ima: Allocated hash algorithm: sha1 Apr 17 23:56:23.052854 kernel: ima: No architecture policies found Apr 17 23:56:23.052861 kernel: clk: Disabling unused clocks Apr 17 23:56:23.052869 kernel: Freeing unused kernel image (initmem) memory: 42892K Apr 17 23:56:23.052876 kernel: Write protecting the kernel read-only data: 36864k Apr 17 23:56:23.052884 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 17 23:56:23.052891 kernel: Run /init as init process Apr 17 23:56:23.052898 kernel: with arguments: Apr 17 23:56:23.052908 kernel: /init Apr 17 23:56:23.052915 kernel: with environment: Apr 17 23:56:23.052923 kernel: HOME=/ Apr 17 23:56:23.052930 kernel: TERM=linux Apr 17 23:56:23.052940 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 17 23:56:23.052949 systemd[1]: Detected virtualization kvm. Apr 17 23:56:23.052957 systemd[1]: Detected architecture x86-64. Apr 17 23:56:23.052967 systemd[1]: Running in initrd. Apr 17 23:56:23.052975 systemd[1]: No hostname configured, using default hostname. Apr 17 23:56:23.052982 systemd[1]: Hostname set to . Apr 17 23:56:23.052990 systemd[1]: Initializing machine ID from random generator. Apr 17 23:56:23.052998 systemd[1]: Queued start job for default target initrd.target. Apr 17 23:56:23.053006 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:56:23.053026 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:56:23.053037 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 17 23:56:23.053046 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 23:56:23.053054 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 17 23:56:23.053062 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 17 23:56:23.053072 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 17 23:56:23.053080 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 17 23:56:23.053090 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:56:23.053099 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:56:23.053107 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:56:23.053114 systemd[1]: Reached target slices.target - Slice Units. Apr 17 23:56:23.053122 systemd[1]: Reached target swap.target - Swaps. Apr 17 23:56:23.053130 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:56:23.053138 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:56:23.053146 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:56:23.053155 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 17 23:56:23.053195 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Apr 17 23:56:23.053204 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:56:23.053212 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:56:23.053220 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:56:23.053228 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:56:23.053236 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 23:56:23.053244 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:56:23.053252 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 17 23:56:23.053263 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 23:56:23.053271 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:56:23.053279 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:56:23.053287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:56:23.053319 systemd-journald[178]: Collecting audit messages is disabled. Apr 17 23:56:23.053342 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 23:56:23.053353 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:56:23.053361 systemd[1]: Finished systemd-fsck-usr.service. Apr 17 23:56:23.053372 systemd-journald[178]: Journal started Apr 17 23:56:23.053389 systemd-journald[178]: Runtime Journal (/run/log/journal/b193c1677bcc48739bfb50a47c0c8ed0) is 8.0M, max 78.3M, 70.3M free. Apr 17 23:56:23.039009 systemd-modules-load[179]: Inserted module 'overlay' Apr 17 23:56:23.152882 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:56:23.152914 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 17 23:56:23.152930 kernel: Bridge firewalling registered Apr 17 23:56:23.075867 systemd-modules-load[179]: Inserted module 'br_netfilter' Apr 17 23:56:23.155093 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:56:23.156188 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:56:23.163758 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:56:23.165730 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:56:23.170752 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 23:56:23.183221 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:56:23.187095 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:56:23.211451 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:56:23.213436 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:56:23.214649 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:56:23.221806 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 17 23:56:23.227972 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:56:23.231970 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Apr 17 23:56:23.240493 dracut-cmdline[210]: dracut-dracut-053 Apr 17 23:56:23.245711 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:56:23.255263 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:56:23.285406 systemd-resolved[211]: Positive Trust Anchors: Apr 17 23:56:23.285429 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:56:23.285475 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:56:23.290403 systemd-resolved[211]: Defaulting to hostname 'linux'. Apr 17 23:56:23.291666 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:56:23.296351 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:56:23.340664 kernel: SCSI subsystem initialized Apr 17 23:56:23.350649 kernel: Loading iSCSI transport class v2.0-870. Apr 17 23:56:23.361674 kernel: iscsi: registered transport (tcp) Apr 17 23:56:23.384576 kernel: iscsi: registered transport (qla4xxx) Apr 17 23:56:23.384674 kernel: QLogic iSCSI HBA Driver Apr 17 23:56:23.436359 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 23:56:23.443860 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 23:56:23.474975 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 17 23:56:23.475047 kernel: device-mapper: uevent: version 1.0.3 Apr 17 23:56:23.480651 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 17 23:56:23.525653 kernel: raid6: avx2x4 gen() 30196 MB/s Apr 17 23:56:23.543645 kernel: raid6: avx2x2 gen() 25158 MB/s Apr 17 23:56:23.561847 kernel: raid6: avx2x1 gen() 21495 MB/s Apr 17 23:56:23.561913 kernel: raid6: using algorithm avx2x4 gen() 30196 MB/s Apr 17 23:56:23.581965 kernel: raid6: .... xor() 4549 MB/s, rmw enabled Apr 17 23:56:23.582029 kernel: raid6: using avx2x2 recovery algorithm Apr 17 23:56:23.603647 kernel: xor: automatically using best checksumming function avx Apr 17 23:56:23.734651 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 23:56:23.746024 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:56:23.751782 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:56:23.766182 systemd-udevd[397]: Using default interface naming scheme 'v255'. Apr 17 23:56:23.770776 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Apr 17 23:56:23.777762 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 17 23:56:23.791810 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Apr 17 23:56:23.820873 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:56:23.826741 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:56:23.895081 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:56:23.902796 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 17 23:56:23.918546 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:56:23.924195 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:56:23.927076 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:56:23.928719 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:56:23.937249 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:56:23.951298 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:56:23.989670 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 23:56:24.188639 kernel: libata version 3.00 loaded. Apr 17 23:56:24.193035 kernel: AVX2 version of gcm_enc/dec engaged. Apr 17 23:56:24.193081 kernel: AES CTR mode by8 optimization enabled Apr 17 23:56:24.195656 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:56:24.207271 kernel: ahci 0000:00:1f.2: version 3.0 Apr 17 23:56:24.207484 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 17 23:56:24.207499 kernel: scsi host0: Virtio SCSI HBA Apr 17 23:56:24.195773 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:56:24.255871 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 17 23:56:24.256082 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 17 23:56:24.256238 kernel: scsi host1: ahci Apr 17 23:56:24.256411 kernel: scsi host2: ahci Apr 17 23:56:24.256570 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 17 23:56:24.256598 kernel: scsi host3: ahci Apr 17 23:56:24.196675 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:56:24.261285 kernel: scsi host4: ahci Apr 17 23:56:24.261662 kernel: scsi host5: ahci Apr 17 23:56:24.261820 kernel: scsi host6: ahci Apr 17 23:56:24.197401 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:56:24.282551 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 Apr 17 23:56:24.282575 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 Apr 17 23:56:24.282594 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 Apr 17 23:56:24.282604 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 Apr 17 23:56:24.282641 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 Apr 17 23:56:24.282652 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 Apr 17 23:56:24.197560 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:56:24.208982 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:56:24.289865 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 17 23:56:24.388536 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:56:24.393751 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:56:24.408085 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:56:24.581585 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 17 23:56:24.581675 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 17 23:56:24.581689 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 17 23:56:24.584636 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 17 23:56:24.584665 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 17 23:56:24.586636 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 17 23:56:24.614810 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 17 23:56:24.615059 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Apr 17 23:56:24.640057 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 17 23:56:24.642224 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 17 23:56:24.642428 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 17 23:56:24.651355 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 23:56:24.651382 kernel: GPT:9289727 != 167739391 Apr 17 23:56:24.651396 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 23:56:24.654034 kernel: GPT:9289727 != 167739391 Apr 17 23:56:24.657052 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 23:56:24.657072 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:56:24.660396 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 17 23:56:24.698635 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (452) Apr 17 23:56:24.698685 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (468) Apr 17 23:56:24.709943 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 17 23:56:24.717335 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 17 23:56:24.723673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 17 23:56:24.728036 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 17 23:56:24.730479 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 17 23:56:24.736741 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:56:24.747631 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:56:24.747999 disk-uuid[567]: Primary Header is updated. Apr 17 23:56:24.747999 disk-uuid[567]: Secondary Entries is updated. Apr 17 23:56:24.747999 disk-uuid[567]: Secondary Header is updated. Apr 17 23:56:25.764683 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:56:25.768632 disk-uuid[568]: The operation has completed successfully. Apr 17 23:56:25.813682 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:56:25.813815 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 23:56:25.823771 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
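The GPT messages above ("GPT:9289727 != 167739391") mean the primary GPT header written for the original image still points at a backup header at LBA 9289727, while the resized 80 GiB disk actually ends at LBA 167739391; disk-uuid.service then rewrites the secondary header and entries at the true end of the disk. The Python sketch below shows only that consistency check, assuming 512-byte logical blocks and reading the alternate-LBA field at offset 32 of the GPT header in LBA 1; it is illustrative, not what the kernel or disk-uuid.service actually run.

    # Sketch: compare the alternate-header LBA recorded in the primary GPT
    # header (LBA 1) with the real last LBA of the disk. A mismatch is what
    # produces the "GPT:9289727 != 167739391" kernel messages above.
    import os
    import struct

    def gpt_backup_mismatch(device: str, block_size: int = 512):
        with open(device, "rb") as disk:
            disk.seek(0, os.SEEK_END)
            last_lba = disk.tell() // block_size - 1
            disk.seek(block_size)                 # primary GPT header lives at LBA 1
            header = disk.read(92)
        if header[:8] != b"EFI PART":
            raise ValueError("no GPT signature on " + device)
        alternate_lba = struct.unpack_from("<Q", header, 32)[0]
        return alternate_lba, last_lba, alternate_lba != last_lba

    # Example: alt, last, mismatch = gpt_backup_mismatch("/dev/sda")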
Apr 17 23:56:25.829702 sh[585]: Success Apr 17 23:56:25.846646 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 17 23:56:25.898576 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:56:25.915659 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 23:56:25.916762 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 17 23:56:25.935940 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 Apr 17 23:56:25.935992 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:56:25.938969 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:56:25.944342 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:56:25.944370 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:56:25.955638 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 17 23:56:25.957395 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:56:25.958915 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:56:25.965841 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:56:25.969760 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 23:56:25.984058 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:56:25.984128 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:56:25.986793 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:56:25.995920 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 23:56:25.995959 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:56:26.008095 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 17 23:56:26.012269 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:56:26.019285 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:56:26.026881 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 17 23:56:26.114185 ignition[687]: Ignition 2.19.0 Apr 17 23:56:26.115315 ignition[687]: Stage: fetch-offline Apr 17 23:56:26.115379 ignition[687]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:56:26.115393 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:56:26.119018 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:56:26.115519 ignition[687]: parsed url from cmdline: "" Apr 17 23:56:26.115524 ignition[687]: no config URL provided Apr 17 23:56:26.115530 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:56:26.123424 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:56:26.115540 ignition[687]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:56:26.115546 ignition[687]: failed to fetch config: resource requires networking Apr 17 23:56:26.115871 ignition[687]: Ignition finished successfully Apr 17 23:56:26.131919 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
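verity-setup above maps /dev/mapper/usr through dm-verity, so every block read from the /usr partition is checked against a sha256 hash tree whose root must equal the verity.usrhash value on the kernel command line. The sketch below only illustrates the underlying idea with a single-level tree: hash each 4096-byte data block, then hash the concatenation of those digests into one root. Real dm-verity uses a salted multi-level tree with its own on-disk layout, so this toy root will not match the actual usrhash.

    # Conceptual sketch of a one-level dm-verity-style hash tree.
    # Not byte-compatible with real dm-verity (no salt, single level,
    # different layout); it only shows how a single root hash can
    # commit to every block of an image.
    import hashlib

    BLOCK_SIZE = 4096

    def toy_verity_root(image_path: str) -> str:
        leaf_digests = []
        with open(image_path, "rb") as img:
            while True:
                block = img.read(BLOCK_SIZE)
                if not block:
                    break
                block = block.ljust(BLOCK_SIZE, b"\0")  # pad the final short block
                leaf_digests.append(hashlib.sha256(block).digest())
        return hashlib.sha256(b"".join(leaf_digests)).hexdigest()

    # Example: print(toy_verity_root("usr-partition.img"))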
Apr 17 23:56:26.156446 systemd-networkd[772]: lo: Link UP Apr 17 23:56:26.156460 systemd-networkd[772]: lo: Gained carrier Apr 17 23:56:26.158459 systemd-networkd[772]: Enumeration completed Apr 17 23:56:26.159042 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:56:26.159049 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:56:26.160831 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:56:26.162060 systemd-networkd[772]: eth0: Link UP Apr 17 23:56:26.162066 systemd-networkd[772]: eth0: Gained carrier Apr 17 23:56:26.162085 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:56:26.162928 systemd[1]: Reached target network.target - Network. Apr 17 23:56:26.172894 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 17 23:56:26.188179 ignition[774]: Ignition 2.19.0 Apr 17 23:56:26.188198 ignition[774]: Stage: fetch Apr 17 23:56:26.188370 ignition[774]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:56:26.188384 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:56:26.188490 ignition[774]: parsed url from cmdline: "" Apr 17 23:56:26.188495 ignition[774]: no config URL provided Apr 17 23:56:26.188500 ignition[774]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:56:26.188511 ignition[774]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:56:26.188533 ignition[774]: PUT http://169.254.169.254/v1/token: attempt #1 Apr 17 23:56:26.188777 ignition[774]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 23:56:26.389073 ignition[774]: PUT http://169.254.169.254/v1/token: attempt #2 Apr 17 23:56:26.389413 ignition[774]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 23:56:26.789911 ignition[774]: PUT http://169.254.169.254/v1/token: attempt #3 Apr 17 23:56:26.790086 ignition[774]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 23:56:27.012691 systemd-networkd[772]: eth0: DHCPv4 address 172.232.15.112/24, gateway 172.232.15.1 acquired from 23.210.200.66 Apr 17 23:56:27.590972 ignition[774]: PUT http://169.254.169.254/v1/token: attempt #4 Apr 17 23:56:27.672738 ignition[774]: PUT result: OK Apr 17 23:56:27.672794 ignition[774]: GET http://169.254.169.254/v1/user-data: attempt #1 Apr 17 23:56:27.784047 ignition[774]: GET result: OK Apr 17 23:56:27.784143 ignition[774]: parsing config with SHA512: 678ec92b262cc9b3422bbab1298574a693a17c5eaf879a8099f9180fab86ae51d4faf864de079405652479e4ce1c31ab691cdec5bb5dd14aed05590f3ea9fc72 Apr 17 23:56:27.787698 unknown[774]: fetched base config from "system" Apr 17 23:56:27.788370 unknown[774]: fetched base config from "system" Apr 17 23:56:27.788378 unknown[774]: fetched user config from "akamai" Apr 17 23:56:27.788711 ignition[774]: fetch: fetch complete Apr 17 23:56:27.788717 ignition[774]: fetch: fetch passed Apr 17 23:56:27.788765 ignition[774]: Ignition finished successfully Apr 17 23:56:27.795164 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 17 23:56:27.807758 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
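The fetch stage above talks to the link-local metadata service: it PUTs to http://169.254.169.254/v1/token until the DHCP lease on eth0 makes that address reachable, then GETs /v1/user-data with the returned token. A rough Python equivalent of that two-step flow is sketched below; the Metadata-Token-Expiry-Seconds and Metadata-Token header names follow the Linode/Akamai metadata API as generally documented and are assumptions here, not values visible in this log.

    # Sketch of the token + user-data exchange seen in the ignition
    # "fetch" stage. Header names are assumed from the Linode/Akamai
    # metadata API, not taken from this log.
    import urllib.request

    METADATA = "http://169.254.169.254/v1"

    def fetch_user_data() -> bytes:
        token_req = urllib.request.Request(
            METADATA + "/token",
            method="PUT",
            headers={"Metadata-Token-Expiry-Seconds": "300"},
        )
        with urllib.request.urlopen(token_req, timeout=5) as resp:
            token = resp.read().decode().strip()

        data_req = urllib.request.Request(
            METADATA + "/user-data",
            headers={"Metadata-Token": token},
        )
        with urllib.request.urlopen(data_req, timeout=5) as resp:
            return resp.read()

    # Example: print(fetch_user_data()[:80])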
Apr 17 23:56:27.823116 ignition[782]: Ignition 2.19.0 Apr 17 23:56:27.823129 ignition[782]: Stage: kargs Apr 17 23:56:27.823358 ignition[782]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:56:27.823376 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:56:27.826047 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 17 23:56:27.824289 ignition[782]: kargs: kargs passed Apr 17 23:56:27.824333 ignition[782]: Ignition finished successfully Apr 17 23:56:27.831791 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 17 23:56:27.847829 ignition[789]: Ignition 2.19.0 Apr 17 23:56:27.847841 ignition[789]: Stage: disks Apr 17 23:56:27.847992 ignition[789]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:56:27.848004 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:56:27.850668 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 17 23:56:27.848702 ignition[789]: disks: disks passed Apr 17 23:56:27.874506 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 17 23:56:27.848742 ignition[789]: Ignition finished successfully Apr 17 23:56:27.875650 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 23:56:27.877481 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:56:27.878834 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:56:27.880603 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:56:27.888975 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 17 23:56:27.905343 systemd-fsck[797]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 17 23:56:27.909137 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 17 23:56:27.914749 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 17 23:56:28.017638 kernel: EXT4-fs (sda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none. Apr 17 23:56:28.019119 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 17 23:56:28.020689 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 17 23:56:28.027703 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:56:28.031452 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 17 23:56:28.032497 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 17 23:56:28.032553 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 17 23:56:28.032586 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Apr 17 23:56:28.049353 systemd-networkd[772]: eth0: Gained IPv6LL Apr 17 23:56:28.071388 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (805) Apr 17 23:56:28.071416 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:56:28.071428 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:56:28.071438 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:56:28.071449 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 23:56:28.071459 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:56:28.056296 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 17 23:56:28.076759 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 17 23:56:28.079954 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 17 23:56:28.128069 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory Apr 17 23:56:28.134835 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory Apr 17 23:56:28.139895 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory Apr 17 23:56:28.145526 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory Apr 17 23:56:28.237592 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 17 23:56:28.249694 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 17 23:56:28.255766 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 17 23:56:28.262369 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:56:28.259171 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 17 23:56:28.291352 ignition[919]: INFO : Ignition 2.19.0 Apr 17 23:56:28.291352 ignition[919]: INFO : Stage: mount Apr 17 23:56:28.291352 ignition[919]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:56:28.291352 ignition[919]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:56:28.297760 ignition[919]: INFO : mount: mount passed Apr 17 23:56:28.297760 ignition[919]: INFO : Ignition finished successfully Apr 17 23:56:28.296971 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 17 23:56:28.299674 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 17 23:56:28.308720 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 17 23:56:29.023763 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:56:29.038636 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (930) Apr 17 23:56:29.038667 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:56:29.042253 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:56:29.046831 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:56:29.051686 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 23:56:29.051716 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:56:29.056332 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 17 23:56:29.076816 ignition[946]: INFO : Ignition 2.19.0 Apr 17 23:56:29.076816 ignition[946]: INFO : Stage: files Apr 17 23:56:29.078735 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:56:29.078735 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:56:29.078735 ignition[946]: DEBUG : files: compiled without relabeling support, skipping Apr 17 23:56:29.082203 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 17 23:56:29.082203 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 17 23:56:29.084448 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 17 23:56:29.084448 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 17 23:56:29.084448 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 17 23:56:29.084392 unknown[946]: wrote ssh authorized keys file for user: core Apr 17 23:56:29.088491 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 17 23:56:29.088491 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 17 23:56:29.088491 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:56:29.088491 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 17 23:56:29.404424 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 17 23:56:29.465214 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:56:29.465214 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:56:29.468371 ignition[946]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:56:29.468371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 17 23:56:29.895478 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 17 23:56:30.299492 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:56:30.299492 ignition[946]: INFO : files: op(c): [started] processing unit "containerd.service" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(c): [finished] processing unit "containerd.service" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Apr 17 23:56:30.324185 ignition[946]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Apr 17 23:56:30.324185 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:56:30.324185 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:56:30.324185 ignition[946]: INFO : files: files passed Apr 17 23:56:30.324185 ignition[946]: INFO : Ignition 
finished successfully Apr 17 23:56:30.305833 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 17 23:56:30.370021 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 17 23:56:30.375753 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 17 23:56:30.390200 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 17 23:56:30.391244 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 17 23:56:30.395693 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:56:30.397425 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:56:30.398890 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:56:30.401102 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:56:30.403584 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 17 23:56:30.410773 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 17 23:56:30.437019 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 17 23:56:30.437147 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 17 23:56:30.438586 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 17 23:56:30.440776 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 17 23:56:30.442510 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 17 23:56:30.448736 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 17 23:56:30.464198 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:56:30.471763 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 17 23:56:30.481068 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:56:30.482199 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:56:30.484120 systemd[1]: Stopped target timers.target - Timer Units. Apr 17 23:56:30.485872 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 17 23:56:30.486061 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:56:30.488135 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 17 23:56:30.489470 systemd[1]: Stopped target basic.target - Basic System. Apr 17 23:56:30.491113 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 17 23:56:30.492690 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:56:30.494242 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 17 23:56:30.495927 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 17 23:56:30.497606 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:56:30.499395 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 17 23:56:30.501082 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 17 23:56:30.502882 systemd[1]: Stopped target swap.target - Swaps. 
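The files stage recorded above (ssh keys for user "core", /etc/flatcar/update.conf, the containerd drop-in, the prepare-helm preset) is driven by the Ignition config fetched from user-data. As a rough picture of what produces such operations, the sketch below emits an Ignition-style JSON config with one file and one unit drop-in; the spec version and field names are assumed from the Ignition v3 configuration format, and the file and drop-in bodies are placeholders, not content recovered from this log.

    # Sketch: build a minimal Ignition-style config of the kind that
    # drives the "createFiles" and drop-in operations in the log.
    # Spec version, field names, and bodies are assumptions, not values
    # extracted from this boot.
    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [
                {
                    "path": "/etc/flatcar/update.conf",
                    "mode": 420,  # 0644
                    "contents": {"source": "data:,"},  # body elided
                }
            ]
        },
        "systemd": {
            "units": [
                {
                    "name": "containerd.service",
                    "dropins": [
                        {
                            "name": "10-use-cgroupfs.conf",
                            "contents": "[Service]\n# drop-in body elided\n",
                        }
                    ],
                }
            ]
        },
    }

    print(json.dumps(config, indent=2))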
Apr 17 23:56:30.504529 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 17 23:56:30.504673 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:56:30.506590 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:56:30.507708 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:56:30.509257 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 17 23:56:30.509677 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:56:30.510988 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 17 23:56:30.511091 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 17 23:56:30.513759 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 17 23:56:30.513961 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:56:30.515776 systemd[1]: ignition-files.service: Deactivated successfully. Apr 17 23:56:30.515879 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 17 23:56:30.524844 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 17 23:56:30.525964 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 17 23:56:30.526146 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:56:30.528906 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 17 23:56:30.532005 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 17 23:56:30.532734 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:56:30.534754 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 17 23:56:30.534924 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:56:30.551409 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 17 23:56:30.551533 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 17 23:56:30.559953 ignition[999]: INFO : Ignition 2.19.0 Apr 17 23:56:30.559953 ignition[999]: INFO : Stage: umount Apr 17 23:56:30.563870 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:56:30.563870 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 23:56:30.563870 ignition[999]: INFO : umount: umount passed Apr 17 23:56:30.563870 ignition[999]: INFO : Ignition finished successfully Apr 17 23:56:30.562852 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 17 23:56:30.562982 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 17 23:56:30.566812 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 17 23:56:30.566865 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 17 23:56:30.567724 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 17 23:56:30.567804 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 17 23:56:30.595825 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 17 23:56:30.595907 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 17 23:56:30.597198 systemd[1]: Stopped target network.target - Network. Apr 17 23:56:30.598549 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 17 23:56:30.598606 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 17 23:56:30.601170 systemd[1]: Stopped target paths.target - Path Units. Apr 17 23:56:30.603937 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 17 23:56:30.607663 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:56:30.614978 systemd[1]: Stopped target slices.target - Slice Units. Apr 17 23:56:30.616518 systemd[1]: Stopped target sockets.target - Socket Units. Apr 17 23:56:30.619268 systemd[1]: iscsid.socket: Deactivated successfully. Apr 17 23:56:30.619343 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:56:30.620138 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 17 23:56:30.620219 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:56:30.620978 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 17 23:56:30.621044 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 17 23:56:30.622800 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 17 23:56:30.622853 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 17 23:56:30.624442 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 17 23:56:30.626789 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 17 23:56:30.629251 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 17 23:56:30.629661 systemd-networkd[772]: eth0: DHCPv6 lease lost Apr 17 23:56:30.631715 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 17 23:56:30.631857 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 17 23:56:30.633260 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 17 23:56:30.633301 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:56:30.638820 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 17 23:56:30.640317 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 17 23:56:30.640381 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:56:30.643764 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:56:30.654157 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 17 23:56:30.654274 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 17 23:56:30.656851 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 17 23:56:30.656980 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 17 23:56:30.664933 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 17 23:56:30.665099 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:56:30.669910 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 17 23:56:30.669969 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 17 23:56:30.670817 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 17 23:56:30.670860 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:56:30.672443 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 17 23:56:30.672498 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:56:30.674706 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 17 23:56:30.674756 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Apr 17 23:56:30.676285 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:56:30.676336 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:56:30.677833 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 17 23:56:30.677883 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 17 23:56:30.685776 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 17 23:56:30.687474 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 17 23:56:30.687538 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:56:30.689218 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 17 23:56:30.689273 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 17 23:56:30.690975 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 17 23:56:30.691026 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:56:30.692704 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 17 23:56:30.692757 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:56:30.694646 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:56:30.694697 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:56:30.696801 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 17 23:56:30.697135 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 17 23:56:30.698593 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 17 23:56:30.698725 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 17 23:56:30.701455 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 17 23:56:30.707779 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 17 23:56:30.718417 systemd[1]: Switching root. Apr 17 23:56:30.752685 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Apr 17 23:56:30.752760 systemd-journald[178]: Journal stopped Apr 17 23:56:32.065311 kernel: SELinux: policy capability network_peer_controls=1 Apr 17 23:56:32.065349 kernel: SELinux: policy capability open_perms=1 Apr 17 23:56:32.065361 kernel: SELinux: policy capability extended_socket_class=1 Apr 17 23:56:32.065371 kernel: SELinux: policy capability always_check_network=0 Apr 17 23:56:32.065380 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 17 23:56:32.065394 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 17 23:56:32.065404 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 17 23:56:32.065414 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 17 23:56:32.065424 kernel: audit: type=1403 audit(1776470190.984:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 17 23:56:32.065435 systemd[1]: Successfully loaded SELinux policy in 56.289ms. Apr 17 23:56:32.065447 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.794ms. 
Apr 17 23:56:32.065462 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 17 23:56:32.065472 systemd[1]: Detected virtualization kvm. Apr 17 23:56:32.065482 systemd[1]: Detected architecture x86-64. Apr 17 23:56:32.065492 systemd[1]: Detected first boot. Apr 17 23:56:32.065505 systemd[1]: Initializing machine ID from random generator. Apr 17 23:56:32.065516 zram_generator::config[1060]: No configuration found. Apr 17 23:56:32.065527 systemd[1]: Populated /etc with preset unit settings. Apr 17 23:56:32.065537 systemd[1]: Queued start job for default target multi-user.target. Apr 17 23:56:32.065547 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 17 23:56:32.065559 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 17 23:56:32.065569 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 17 23:56:32.065580 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 17 23:56:32.065594 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 17 23:56:32.065605 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 17 23:56:32.067065 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 17 23:56:32.067079 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 17 23:56:32.067090 systemd[1]: Created slice user.slice - User and Session Slice. Apr 17 23:56:32.067101 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:56:32.067111 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:56:32.067127 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 17 23:56:32.067137 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 17 23:56:32.067148 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 17 23:56:32.067158 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 23:56:32.067168 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 17 23:56:32.067178 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:56:32.067188 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 17 23:56:32.067199 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:56:32.067209 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:56:32.067223 systemd[1]: Reached target slices.target - Slice Units. Apr 17 23:56:32.067236 systemd[1]: Reached target swap.target - Swaps. Apr 17 23:56:32.067247 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 17 23:56:32.067257 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 17 23:56:32.067268 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Apr 17 23:56:32.067278 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 17 23:56:32.067288 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:56:32.067301 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:56:32.067312 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:56:32.067322 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 17 23:56:32.067333 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 17 23:56:32.067344 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 17 23:56:32.067357 systemd[1]: Mounting media.mount - External Media Directory... Apr 17 23:56:32.067368 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:56:32.067378 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 17 23:56:32.067389 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 17 23:56:32.067399 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 17 23:56:32.067410 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 17 23:56:32.067421 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:56:32.067431 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:56:32.067444 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 17 23:56:32.067455 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:56:32.067465 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 17 23:56:32.067476 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:56:32.067486 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 17 23:56:32.067496 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 23:56:32.067507 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 17 23:56:32.067518 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Apr 17 23:56:32.067532 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Apr 17 23:56:32.067542 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:56:32.067554 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:56:32.067564 kernel: loop: module loaded Apr 17 23:56:32.067574 kernel: fuse: init (API version 7.39) Apr 17 23:56:32.067585 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 17 23:56:32.067632 systemd-journald[1166]: Collecting audit messages is disabled. Apr 17 23:56:32.067660 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 17 23:56:32.067671 kernel: ACPI: bus type drm_connector registered Apr 17 23:56:32.067682 systemd-journald[1166]: Journal started Apr 17 23:56:32.067704 systemd-journald[1166]: Runtime Journal (/run/log/journal/0c5d4a34b23447639a268b348918fccf) is 8.0M, max 78.3M, 70.3M free. 
Apr 17 23:56:32.077104 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:56:32.082649 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:56:32.088695 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:56:32.089911 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 17 23:56:32.090989 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 17 23:56:32.091912 systemd[1]: Mounted media.mount - External Media Directory. Apr 17 23:56:32.092793 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 17 23:56:32.093739 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 17 23:56:32.094672 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 17 23:56:32.095851 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 17 23:56:32.097052 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:56:32.098324 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 17 23:56:32.098592 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 17 23:56:32.099949 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:56:32.100213 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:56:32.101340 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 23:56:32.101589 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 23:56:32.102895 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 23:56:32.103094 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:56:32.104627 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 17 23:56:32.104893 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 17 23:56:32.106121 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:56:32.106426 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 23:56:32.108808 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:56:32.111161 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 17 23:56:32.114100 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 17 23:56:32.131145 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 17 23:56:32.135727 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 17 23:56:32.148021 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 17 23:56:32.148940 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 17 23:56:32.150890 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 17 23:56:32.165652 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 17 23:56:32.166567 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 23:56:32.172603 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Apr 17 23:56:32.204680 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 23:56:32.209729 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:56:32.217706 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 23:56:32.237764 systemd-journald[1166]: Time spent on flushing to /var/log/journal/0c5d4a34b23447639a268b348918fccf is 38.491ms for 960 entries. Apr 17 23:56:32.237764 systemd-journald[1166]: System Journal (/var/log/journal/0c5d4a34b23447639a268b348918fccf) is 8.0M, max 195.6M, 187.6M free. Apr 17 23:56:32.321501 systemd-journald[1166]: Received client request to flush runtime journal. Apr 17 23:56:32.224912 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:56:32.228905 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 17 23:56:32.229788 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 17 23:56:32.235758 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 17 23:56:32.252938 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 17 23:56:32.257775 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 17 23:56:32.315423 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:56:32.325922 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 17 23:56:32.330141 udevadm[1210]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 17 23:56:32.340124 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Apr 17 23:56:32.340141 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Apr 17 23:56:32.346723 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:56:32.355859 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 17 23:56:32.387881 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 17 23:56:32.405772 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:56:32.424302 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Apr 17 23:56:32.424697 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Apr 17 23:56:32.431209 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:56:32.741951 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 17 23:56:32.748802 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:56:32.786857 systemd-udevd[1231]: Using default interface naming scheme 'v255'. Apr 17 23:56:32.808209 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:56:32.818813 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:56:32.843809 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 17 23:56:32.906256 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Apr 17 23:56:32.913795 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Apr 17 23:56:32.951676 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 17 23:56:32.996694 kernel: ACPI: button: Power Button [PWRF] Apr 17 23:56:33.005713 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 17 23:56:33.005989 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 17 23:56:33.006172 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 17 23:56:33.040728 systemd-networkd[1236]: lo: Link UP Apr 17 23:56:33.041099 systemd-networkd[1236]: lo: Gained carrier Apr 17 23:56:33.051634 kernel: EDAC MC: Ver: 3.0.0 Apr 17 23:56:33.045896 systemd-networkd[1236]: Enumeration completed Apr 17 23:56:33.046025 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:56:33.047294 systemd-networkd[1236]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:56:33.047299 systemd-networkd[1236]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:56:33.049159 systemd-networkd[1236]: eth0: Link UP Apr 17 23:56:33.049165 systemd-networkd[1236]: eth0: Gained carrier Apr 17 23:56:33.049177 systemd-networkd[1236]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:56:33.056732 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 17 23:56:33.072084 systemd-networkd[1236]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:56:33.087222 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 17 23:56:33.096648 kernel: mousedev: PS/2 mouse device common for all mice Apr 17 23:56:33.106688 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1232) Apr 17 23:56:33.112864 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:56:33.168398 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 17 23:56:33.174094 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 17 23:56:33.182211 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 17 23:56:33.196117 lvm[1275]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:56:33.221066 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 17 23:56:33.292114 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:56:33.293899 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:56:33.300760 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 17 23:56:33.311428 lvm[1282]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:56:33.347288 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 17 23:56:33.348753 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 23:56:33.349855 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 17 23:56:33.349951 systemd[1]: Reached target local-fs.target - Local File Systems. 
Apr 17 23:56:33.350952 systemd[1]: Reached target machines.target - Containers. Apr 17 23:56:33.352937 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 17 23:56:33.365782 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 17 23:56:33.369758 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 17 23:56:33.370880 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:56:33.372218 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 17 23:56:33.385935 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 17 23:56:33.390442 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 17 23:56:33.394539 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 17 23:56:33.409767 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 17 23:56:33.415006 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 17 23:56:33.425197 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 17 23:56:33.432745 kernel: loop0: detected capacity change from 0 to 142488 Apr 17 23:56:33.464645 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 17 23:56:33.491692 kernel: loop1: detected capacity change from 0 to 140768 Apr 17 23:56:33.529650 kernel: loop2: detected capacity change from 0 to 228704 Apr 17 23:56:33.570655 kernel: loop3: detected capacity change from 0 to 8 Apr 17 23:56:33.596647 kernel: loop4: detected capacity change from 0 to 142488 Apr 17 23:56:33.617645 kernel: loop5: detected capacity change from 0 to 140768 Apr 17 23:56:33.639648 kernel: loop6: detected capacity change from 0 to 228704 Apr 17 23:56:33.656877 kernel: loop7: detected capacity change from 0 to 8 Apr 17 23:56:33.657940 (sd-merge)[1303]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Apr 17 23:56:33.659125 (sd-merge)[1303]: Merged extensions into '/usr'. Apr 17 23:56:33.684269 systemd[1]: Reloading requested from client PID 1290 ('systemd-sysext') (unit systemd-sysext.service)... Apr 17 23:56:33.684440 systemd[1]: Reloading... Apr 17 23:56:33.788643 zram_generator::config[1331]: No configuration found. Apr 17 23:56:33.887692 ldconfig[1286]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 17 23:56:33.923672 systemd-networkd[1236]: eth0: DHCPv4 address 172.232.15.112/24, gateway 172.232.15.1 acquired from 23.210.200.66 Apr 17 23:56:33.958114 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:56:34.022539 systemd[1]: Reloading finished in 337 ms. Apr 17 23:56:34.040924 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 17 23:56:34.042517 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 17 23:56:34.055929 systemd[1]: Starting ensure-sysext.service... Apr 17 23:56:34.066809 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Apr 17 23:56:34.077179 systemd[1]: Reloading requested from client PID 1381 ('systemctl') (unit ensure-sysext.service)... Apr 17 23:56:34.077347 systemd[1]: Reloading... Apr 17 23:56:34.094149 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 17 23:56:34.094505 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 17 23:56:34.096113 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 17 23:56:34.096541 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Apr 17 23:56:34.096650 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Apr 17 23:56:34.100603 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 23:56:34.100732 systemd-tmpfiles[1383]: Skipping /boot Apr 17 23:56:34.119265 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 23:56:34.119287 systemd-tmpfiles[1383]: Skipping /boot Apr 17 23:56:34.179583 zram_generator::config[1413]: No configuration found. Apr 17 23:56:34.300794 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:56:34.369138 systemd[1]: Reloading finished in 291 ms. Apr 17 23:56:34.391579 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:56:34.412782 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:56:34.418803 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 17 23:56:34.426831 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 17 23:56:34.441816 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:56:34.453817 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 17 23:56:34.464098 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:56:34.464365 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:56:34.473031 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:56:34.485283 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:56:34.490975 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 23:56:34.492045 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:56:34.492149 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:56:34.502307 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:56:34.502549 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:56:34.502781 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 17 23:56:34.502896 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:56:34.514979 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 17 23:56:34.520676 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 17 23:56:34.524354 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:56:34.524576 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:56:34.530396 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 23:56:34.532692 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:56:34.538454 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:56:34.539055 augenrules[1493]: No rules Apr 17 23:56:34.543330 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:56:34.543689 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 23:56:34.558263 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:56:34.559579 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:56:34.568970 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:56:34.575922 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 17 23:56:34.587972 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:56:34.600043 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 23:56:34.603961 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:56:34.613486 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 17 23:56:34.618193 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:56:34.620252 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 17 23:56:34.621679 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:56:34.622072 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:56:34.625179 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 23:56:34.625412 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 23:56:34.628264 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 23:56:34.628483 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:56:34.631112 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:56:34.631378 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 23:56:34.632366 systemd-resolved[1476]: Positive Trust Anchors: Apr 17 23:56:34.633660 systemd-resolved[1476]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:56:34.633750 systemd-resolved[1476]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:56:34.645743 systemd-resolved[1476]: Defaulting to hostname 'linux'. Apr 17 23:56:34.646099 systemd[1]: Finished ensure-sysext.service. Apr 17 23:56:34.648230 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 17 23:56:34.649459 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:56:34.655036 systemd[1]: Reached target network.target - Network. Apr 17 23:56:34.655788 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:56:34.656593 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 23:56:34.656785 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 23:56:34.663786 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 17 23:56:34.664626 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 17 23:56:34.751248 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 17 23:56:34.752349 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:56:34.753262 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 17 23:56:34.754106 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 17 23:56:34.754917 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 17 23:56:34.755756 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 17 23:56:34.755790 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:56:34.756490 systemd[1]: Reached target time-set.target - System Time Set. Apr 17 23:56:34.757593 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 17 23:56:34.758476 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 17 23:56:34.759415 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:56:34.760759 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 17 23:56:34.763583 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 17 23:56:34.765888 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 17 23:56:34.769400 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 17 23:56:34.770423 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 17 23:56:34.771291 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:56:34.772546 systemd[1]: System is tainted: cgroupsv1 Apr 17 23:56:34.772674 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:56:34.772767 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:56:34.775300 systemd[1]: Starting containerd.service - containerd container runtime... Apr 17 23:56:34.779743 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 17 23:56:34.783694 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 17 23:56:34.789685 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 17 23:56:34.796342 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 17 23:56:34.802707 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 17 23:56:34.819765 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 17 23:56:34.827800 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 17 23:56:34.843237 jq[1533]: false Apr 17 23:56:34.842770 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 17 23:56:34.853751 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 17 23:56:34.874814 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 17 23:56:34.882652 extend-filesystems[1534]: Found loop4 Apr 17 23:56:34.882652 extend-filesystems[1534]: Found loop5 Apr 17 23:56:34.882652 extend-filesystems[1534]: Found loop6 Apr 17 23:56:34.882652 extend-filesystems[1534]: Found loop7 Apr 17 23:56:34.882652 extend-filesystems[1534]: Found sda Apr 17 23:56:34.882652 extend-filesystems[1534]: Found sda1 Apr 17 23:56:34.882652 extend-filesystems[1534]: Found sda2 Apr 17 23:56:34.882652 extend-filesystems[1534]: Found sda3 Apr 17 23:56:34.882652 extend-filesystems[1534]: Found usr Apr 17 23:56:34.882652 extend-filesystems[1534]: Found sda4 Apr 17 23:56:34.882652 extend-filesystems[1534]: Found sda6 Apr 17 23:56:34.882652 extend-filesystems[1534]: Found sda7 Apr 17 23:56:34.882652 extend-filesystems[1534]: Found sda9 Apr 17 23:56:34.882652 extend-filesystems[1534]: Checking size of /dev/sda9 Apr 17 23:56:35.476266 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Apr 17 23:56:34.880789 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 17 23:56:35.476413 extend-filesystems[1534]: Resized partition /dev/sda9 Apr 17 23:56:34.904059 dbus-daemon[1531]: [system] SELinux support is enabled Apr 17 23:56:34.891771 systemd[1]: Starting update-engine.service - Update Engine... 
Apr 17 23:56:35.507171 update_engine[1553]: I20260417 23:56:35.487489 1553 main.cc:92] Flatcar Update Engine starting Apr 17 23:56:35.507171 update_engine[1553]: I20260417 23:56:35.491020 1553 update_check_scheduler.cc:74] Next update check in 8m47s Apr 17 23:56:35.507709 extend-filesystems[1562]: resize2fs 1.47.1 (20-May-2024) Apr 17 23:56:35.431073 dbus-daemon[1531]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1236 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 17 23:56:35.425573 systemd-resolved[1476]: Clock change detected. Flushing caches. Apr 17 23:56:35.425705 systemd-timesyncd[1525]: Contacted time server 149.248.12.167:123 (0.flatcar.pool.ntp.org). Apr 17 23:56:35.518926 jq[1559]: true Apr 17 23:56:35.426170 systemd-timesyncd[1525]: Initial clock synchronization to Fri 2026-04-17 23:56:35.425538 UTC. Apr 17 23:56:35.432261 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 17 23:56:35.437368 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 17 23:56:35.445857 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 17 23:56:35.446966 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 17 23:56:35.447555 systemd[1]: motdgen.service: Deactivated successfully. Apr 17 23:56:35.447849 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 17 23:56:35.459552 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 17 23:56:35.469564 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 17 23:56:35.509230 (ntainerd)[1571]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 17 23:56:35.524832 tar[1566]: linux-amd64/LICENSE Apr 17 23:56:35.524832 tar[1566]: linux-amd64/helm Apr 17 23:56:35.521014 dbus-daemon[1531]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 17 23:56:35.526026 systemd[1]: Started update-engine.service - Update Engine. Apr 17 23:56:35.529889 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 17 23:56:35.529921 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 17 23:56:35.536238 coreos-metadata[1530]: Apr 17 23:56:35.536 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Apr 17 23:56:35.542259 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 17 23:56:35.542462 systemd-networkd[1236]: eth0: Gained IPv6LL Apr 17 23:56:35.543224 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 17 23:56:35.543258 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 17 23:56:35.545777 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 17 23:56:35.554657 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Apr 17 23:56:35.568773 jq[1570]: true Apr 17 23:56:35.567645 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 17 23:56:35.624872 systemd[1]: Reached target network-online.target - Network is Online. Apr 17 23:56:35.639239 coreos-metadata[1530]: Apr 17 23:56:35.637 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Apr 17 23:56:35.638554 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:56:35.650242 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1244) Apr 17 23:56:35.663000 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 17 23:56:35.838208 coreos-metadata[1530]: Apr 17 23:56:35.829 INFO Fetch successful Apr 17 23:56:35.838208 coreos-metadata[1530]: Apr 17 23:56:35.830 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Apr 17 23:56:35.823622 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 17 23:56:35.882204 systemd-logind[1550]: Watching system buttons on /dev/input/event1 (Power Button) Apr 17 23:56:35.882605 systemd-logind[1550]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 17 23:56:35.883849 systemd-logind[1550]: New seat seat0. Apr 17 23:56:35.897053 systemd[1]: Started systemd-logind.service - User Login Management. Apr 17 23:56:35.902250 bash[1610]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:56:35.908941 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 17 23:56:35.926485 systemd[1]: Starting sshkeys.service... Apr 17 23:56:35.977407 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 17 23:56:36.130821 coreos-metadata[1530]: Apr 17 23:56:36.129 INFO Fetch successful Apr 17 23:56:35.980373 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 17 23:56:36.180237 dbus-daemon[1531]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 17 23:56:36.180502 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 17 23:56:36.183375 dbus-daemon[1531]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1580 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 17 23:56:36.271890 systemd[1]: Starting polkit.service - Authorization Manager... Apr 17 23:56:36.507616 polkitd[1627]: Started polkitd version 121 Apr 17 23:56:36.596783 locksmithd[1581]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 17 23:56:36.671784 polkitd[1627]: Loading rules from directory /etc/polkit-1/rules.d Apr 17 23:56:36.671851 polkitd[1627]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 17 23:56:36.712064 coreos-metadata[1623]: Apr 17 23:56:36.711 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Apr 17 23:56:36.732539 sshd_keygen[1561]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 17 23:56:36.738030 polkitd[1627]: Finished loading, compiling and executing 2 rules Apr 17 23:56:36.738725 dbus-daemon[1531]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 17 23:56:36.740315 polkitd[1627]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 17 23:56:36.757681 systemd[1]: Started polkit.service - Authorization Manager. 
Apr 17 23:56:36.964482 coreos-metadata[1623]: Apr 17 23:56:36.948 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Apr 17 23:56:37.004221 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 17 23:56:37.023800 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 23:56:37.113543 systemd-hostnamed[1580]: Hostname set to <172-232-15-112> (transient) Apr 17 23:56:37.118722 systemd-resolved[1476]: System hostname changed to '172-232-15-112'. Apr 17 23:56:37.119600 coreos-metadata[1623]: Apr 17 23:56:37.119 INFO Fetch successful Apr 17 23:56:37.125491 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 23:56:37.125833 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 23:56:37.148887 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 23:56:37.156385 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 17 23:56:37.201145 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Apr 17 23:56:37.212934 extend-filesystems[1562]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 17 23:56:37.212934 extend-filesystems[1562]: old_desc_blocks = 1, new_desc_blocks = 10 Apr 17 23:56:37.212934 extend-filesystems[1562]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Apr 17 23:56:37.231191 extend-filesystems[1534]: Resized filesystem in /dev/sda9 Apr 17 23:56:37.215346 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 17 23:56:37.215651 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 17 23:56:37.236703 containerd[1571]: time="2026-04-17T23:56:37.236593045Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 17 23:56:37.244810 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 23:56:37.280048 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 17 23:56:37.363275 update-ssh-keys[1684]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:56:37.286136 systemd[1]: Finished sshkeys.service. Apr 17 23:56:37.954064 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 23:56:37.982542 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 23:56:37.985415 containerd[1571]: time="2026-04-17T23:56:37.985373715Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:56:38.007163 containerd[1571]: time="2026-04-17T23:56:38.006101855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:56:38.007163 containerd[1571]: time="2026-04-17T23:56:38.006161425Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 17 23:56:38.007163 containerd[1571]: time="2026-04-17T23:56:38.006180075Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 17 23:56:38.007163 containerd[1571]: time="2026-04-17T23:56:38.006385455Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Apr 17 23:56:38.007163 containerd[1571]: time="2026-04-17T23:56:38.006402495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 17 23:56:38.007163 containerd[1571]: time="2026-04-17T23:56:38.006471535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:56:38.007163 containerd[1571]: time="2026-04-17T23:56:38.006485555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:56:38.007163 containerd[1571]: time="2026-04-17T23:56:38.006765465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:56:38.007163 containerd[1571]: time="2026-04-17T23:56:38.006781405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 17 23:56:38.007163 containerd[1571]: time="2026-04-17T23:56:38.006795135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:56:38.007163 containerd[1571]: time="2026-04-17T23:56:38.006806005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 17 23:56:38.007440 containerd[1571]: time="2026-04-17T23:56:38.006895315Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:56:38.009083 containerd[1571]: time="2026-04-17T23:56:38.007608135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:56:38.009083 containerd[1571]: time="2026-04-17T23:56:38.007803685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:56:38.009083 containerd[1571]: time="2026-04-17T23:56:38.007822015Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 17 23:56:38.009083 containerd[1571]: time="2026-04-17T23:56:38.007930235Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 17 23:56:38.009083 containerd[1571]: time="2026-04-17T23:56:38.007986495Z" level=info msg="metadata content store policy set" policy=shared Apr 17 23:56:38.017139 containerd[1571]: time="2026-04-17T23:56:38.016674275Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 17 23:56:38.017139 containerd[1571]: time="2026-04-17T23:56:38.016753745Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 17 23:56:38.017139 containerd[1571]: time="2026-04-17T23:56:38.016806565Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 17 23:56:38.017139 containerd[1571]: time="2026-04-17T23:56:38.016823815Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Apr 17 23:56:38.017139 containerd[1571]: time="2026-04-17T23:56:38.016844595Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 17 23:56:38.017139 containerd[1571]: time="2026-04-17T23:56:38.017047085Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 17 23:56:38.017862 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 23:56:38.033373 containerd[1571]: time="2026-04-17T23:56:38.032727455Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 17 23:56:38.033373 containerd[1571]: time="2026-04-17T23:56:38.032998175Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 17 23:56:38.033373 containerd[1571]: time="2026-04-17T23:56:38.033039165Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 17 23:56:38.033373 containerd[1571]: time="2026-04-17T23:56:38.033075725Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 17 23:56:38.033373 containerd[1571]: time="2026-04-17T23:56:38.033111115Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 17 23:56:38.033373 containerd[1571]: time="2026-04-17T23:56:38.033147865Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 17 23:56:38.033373 containerd[1571]: time="2026-04-17T23:56:38.033160525Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 17 23:56:38.033373 containerd[1571]: time="2026-04-17T23:56:38.033175175Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 17 23:56:38.033373 containerd[1571]: time="2026-04-17T23:56:38.033191375Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 17 23:56:38.033373 containerd[1571]: time="2026-04-17T23:56:38.033205345Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 17 23:56:38.033373 containerd[1571]: time="2026-04-17T23:56:38.033227825Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 17 23:56:38.033373 containerd[1571]: time="2026-04-17T23:56:38.033240355Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 17 23:56:38.033373 containerd[1571]: time="2026-04-17T23:56:38.033263735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 17 23:56:38.033373 containerd[1571]: time="2026-04-17T23:56:38.033277225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 17 23:56:38.021059 systemd[1]: Reached target getty.target - Login Prompts. Apr 17 23:56:38.033705 containerd[1571]: time="2026-04-17T23:56:38.033290075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 17 23:56:38.033705 containerd[1571]: time="2026-04-17T23:56:38.033304735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Apr 17 23:56:38.033705 containerd[1571]: time="2026-04-17T23:56:38.033341125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 17 23:56:38.033705 containerd[1571]: time="2026-04-17T23:56:38.033364785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 17 23:56:38.033705 containerd[1571]: time="2026-04-17T23:56:38.033376085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 17 23:56:38.033705 containerd[1571]: time="2026-04-17T23:56:38.033393205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 17 23:56:38.033705 containerd[1571]: time="2026-04-17T23:56:38.033405855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 17 23:56:38.033705 containerd[1571]: time="2026-04-17T23:56:38.033431945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 17 23:56:38.033705 containerd[1571]: time="2026-04-17T23:56:38.033444215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 17 23:56:38.033705 containerd[1571]: time="2026-04-17T23:56:38.033476535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 17 23:56:38.033705 containerd[1571]: time="2026-04-17T23:56:38.033490405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 17 23:56:38.033705 containerd[1571]: time="2026-04-17T23:56:38.033505585Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 17 23:56:38.033705 containerd[1571]: time="2026-04-17T23:56:38.033530375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 17 23:56:38.033705 containerd[1571]: time="2026-04-17T23:56:38.033543535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 17 23:56:38.033705 containerd[1571]: time="2026-04-17T23:56:38.033554195Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 17 23:56:38.025952 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 17 23:56:38.034044 containerd[1571]: time="2026-04-17T23:56:38.033606465Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 17 23:56:38.034044 containerd[1571]: time="2026-04-17T23:56:38.033639125Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 17 23:56:38.034044 containerd[1571]: time="2026-04-17T23:56:38.033650445Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 17 23:56:38.034044 containerd[1571]: time="2026-04-17T23:56:38.033661365Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 17 23:56:38.034044 containerd[1571]: time="2026-04-17T23:56:38.033682155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Apr 17 23:56:38.034044 containerd[1571]: time="2026-04-17T23:56:38.033696895Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 17 23:56:38.034044 containerd[1571]: time="2026-04-17T23:56:38.033728275Z" level=info msg="NRI interface is disabled by configuration." Apr 17 23:56:38.034044 containerd[1571]: time="2026-04-17T23:56:38.033747945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 17 23:56:38.034235 systemd[1]: Started sshd@0-172.232.15.112:22-50.85.169.122:33180.service - OpenSSH per-connection server daemon (50.85.169.122:33180). Apr 17 23:56:38.035271 containerd[1571]: time="2026-04-17T23:56:38.034078155Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 17 23:56:38.035271 containerd[1571]: time="2026-04-17T23:56:38.034191215Z" level=info msg="Connect containerd service" Apr 17 23:56:38.035271 containerd[1571]: time="2026-04-17T23:56:38.034269035Z" level=info msg="using legacy CRI server" Apr 17 23:56:38.035271 containerd[1571]: 
time="2026-04-17T23:56:38.034278885Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 23:56:38.035271 containerd[1571]: time="2026-04-17T23:56:38.034439035Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 17 23:56:38.035271 containerd[1571]: time="2026-04-17T23:56:38.035197715Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:56:38.035981 containerd[1571]: time="2026-04-17T23:56:38.035543525Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 23:56:38.035981 containerd[1571]: time="2026-04-17T23:56:38.035620495Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 23:56:38.035981 containerd[1571]: time="2026-04-17T23:56:38.035886875Z" level=info msg="Start subscribing containerd event" Apr 17 23:56:38.035981 containerd[1571]: time="2026-04-17T23:56:38.035933055Z" level=info msg="Start recovering state" Apr 17 23:56:38.036064 containerd[1571]: time="2026-04-17T23:56:38.036000265Z" level=info msg="Start event monitor" Apr 17 23:56:38.036064 containerd[1571]: time="2026-04-17T23:56:38.036014715Z" level=info msg="Start snapshots syncer" Apr 17 23:56:38.036064 containerd[1571]: time="2026-04-17T23:56:38.036023655Z" level=info msg="Start cni network conf syncer for default" Apr 17 23:56:38.036064 containerd[1571]: time="2026-04-17T23:56:38.036031195Z" level=info msg="Start streaming server" Apr 17 23:56:38.036664 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 23:56:38.057875 containerd[1571]: time="2026-04-17T23:56:38.054610505Z" level=info msg="containerd successfully booted in 0.841997s" Apr 17 23:56:38.924212 sshd[1700]: Accepted publickey for core from 50.85.169.122 port 33180 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 17 23:56:38.927057 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:56:39.011033 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 23:56:39.058645 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 23:56:39.287509 systemd-logind[1550]: New session 1 of user core. Apr 17 23:56:39.367604 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 23:56:39.385874 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 23:56:39.395402 (systemd)[1707]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 23:56:39.529393 tar[1566]: linux-amd64/README.md Apr 17 23:56:39.623726 systemd[1707]: Queued start job for default target default.target. Apr 17 23:56:39.625368 systemd[1707]: Created slice app.slice - User Application Slice. Apr 17 23:56:39.625393 systemd[1707]: Reached target paths.target - Paths. Apr 17 23:56:39.625408 systemd[1707]: Reached target timers.target - Timers. Apr 17 23:56:39.631569 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 17 23:56:39.637281 systemd[1707]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 23:56:39.707640 systemd[1707]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 23:56:39.707726 systemd[1707]: Reached target sockets.target - Sockets. 
Apr 17 23:56:39.707747 systemd[1707]: Reached target basic.target - Basic System. Apr 17 23:56:39.707797 systemd[1707]: Reached target default.target - Main User Target. Apr 17 23:56:39.707837 systemd[1707]: Startup finished in 278ms. Apr 17 23:56:39.708091 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 23:56:39.714466 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 23:56:40.192516 systemd[1]: Started sshd@1-172.232.15.112:22-50.85.169.122:38228.service - OpenSSH per-connection server daemon (50.85.169.122:38228). Apr 17 23:56:40.919935 sshd[1724]: Accepted publickey for core from 50.85.169.122 port 38228 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 17 23:56:40.919825 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:56:40.930863 systemd-logind[1550]: New session 2 of user core. Apr 17 23:56:40.936384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:56:40.942589 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 23:56:40.942750 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:56:40.943541 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 23:56:40.947298 systemd[1]: Startup finished in 9.304s (kernel) + 9.500s (userspace) = 18.805s. Apr 17 23:56:41.369456 sshd[1724]: pam_unix(sshd:session): session closed for user core Apr 17 23:56:41.374652 systemd-logind[1550]: Session 2 logged out. Waiting for processes to exit. Apr 17 23:56:41.376739 systemd[1]: sshd@1-172.232.15.112:22-50.85.169.122:38228.service: Deactivated successfully. Apr 17 23:56:41.380223 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 23:56:41.381038 systemd-logind[1550]: Removed session 2. Apr 17 23:56:41.488601 systemd[1]: Started sshd@2-172.232.15.112:22-50.85.169.122:38240.service - OpenSSH per-connection server daemon (50.85.169.122:38240). Apr 17 23:56:42.090340 sshd[1749]: Accepted publickey for core from 50.85.169.122 port 38240 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 17 23:56:42.093435 sshd[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:56:42.100613 systemd-logind[1550]: New session 3 of user core. Apr 17 23:56:42.121731 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 23:56:42.219525 kubelet[1733]: E0417 23:56:42.219452 1733 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:56:42.223686 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:56:42.224092 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:56:42.513010 sshd[1749]: pam_unix(sshd:session): session closed for user core Apr 17 23:56:42.518333 systemd[1]: sshd@2-172.232.15.112:22-50.85.169.122:38240.service: Deactivated successfully. Apr 17 23:56:42.523151 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 23:56:42.523281 systemd-logind[1550]: Session 3 logged out. Waiting for processes to exit. Apr 17 23:56:42.525346 systemd-logind[1550]: Removed session 3. 
Apr 17 23:56:42.617639 systemd[1]: Started sshd@3-172.232.15.112:22-50.85.169.122:38248.service - OpenSSH per-connection server daemon (50.85.169.122:38248). Apr 17 23:56:43.212198 sshd[1760]: Accepted publickey for core from 50.85.169.122 port 38248 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 17 23:56:43.213056 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:56:43.218911 systemd-logind[1550]: New session 4 of user core. Apr 17 23:56:43.228624 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 23:56:43.641871 sshd[1760]: pam_unix(sshd:session): session closed for user core Apr 17 23:56:43.646557 systemd-logind[1550]: Session 4 logged out. Waiting for processes to exit. Apr 17 23:56:43.647836 systemd[1]: sshd@3-172.232.15.112:22-50.85.169.122:38248.service: Deactivated successfully. Apr 17 23:56:43.651070 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 23:56:43.652208 systemd-logind[1550]: Removed session 4. Apr 17 23:56:43.745478 systemd[1]: Started sshd@4-172.232.15.112:22-50.85.169.122:38258.service - OpenSSH per-connection server daemon (50.85.169.122:38258). Apr 17 23:56:44.343734 sshd[1768]: Accepted publickey for core from 50.85.169.122 port 38258 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 17 23:56:44.344375 sshd[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:56:44.349870 systemd-logind[1550]: New session 5 of user core. Apr 17 23:56:44.355489 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 17 23:56:44.685367 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 23:56:44.685781 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:56:44.703872 sudo[1772]: pam_unix(sudo:session): session closed for user root Apr 17 23:56:44.800972 sshd[1768]: pam_unix(sshd:session): session closed for user core Apr 17 23:56:44.805346 systemd[1]: sshd@4-172.232.15.112:22-50.85.169.122:38258.service: Deactivated successfully. Apr 17 23:56:44.808113 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 23:56:44.808605 systemd-logind[1550]: Session 5 logged out. Waiting for processes to exit. Apr 17 23:56:44.809993 systemd-logind[1550]: Removed session 5. Apr 17 23:56:44.907356 systemd[1]: Started sshd@5-172.232.15.112:22-50.85.169.122:38266.service - OpenSSH per-connection server daemon (50.85.169.122:38266). Apr 17 23:56:45.504189 sshd[1777]: Accepted publickey for core from 50.85.169.122 port 38266 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 17 23:56:45.506512 sshd[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:56:45.513439 systemd-logind[1550]: New session 6 of user core. Apr 17 23:56:45.522512 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 17 23:56:45.839655 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 23:56:45.862379 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:56:45.869168 sudo[1782]: pam_unix(sudo:session): session closed for user root Apr 17 23:56:45.876997 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 17 23:56:45.877428 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:56:45.893397 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 17 23:56:45.897474 auditctl[1785]: No rules Apr 17 23:56:45.897987 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 23:56:45.898357 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 17 23:56:45.905583 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:56:45.999232 augenrules[1804]: No rules Apr 17 23:56:46.001048 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:56:46.004609 sudo[1781]: pam_unix(sudo:session): session closed for user root Apr 17 23:56:46.102110 sshd[1777]: pam_unix(sshd:session): session closed for user core Apr 17 23:56:46.108710 systemd[1]: sshd@5-172.232.15.112:22-50.85.169.122:38266.service: Deactivated successfully. Apr 17 23:56:46.112043 systemd-logind[1550]: Session 6 logged out. Waiting for processes to exit. Apr 17 23:56:46.112799 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 23:56:46.115569 systemd-logind[1550]: Removed session 6. Apr 17 23:56:46.213385 systemd[1]: Started sshd@6-172.232.15.112:22-50.85.169.122:38270.service - OpenSSH per-connection server daemon (50.85.169.122:38270). Apr 17 23:56:46.808557 sshd[1813]: Accepted publickey for core from 50.85.169.122 port 38270 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 17 23:56:46.810467 sshd[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:56:46.816553 systemd-logind[1550]: New session 7 of user core. Apr 17 23:56:46.825742 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 23:56:47.143791 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 23:56:47.144351 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:56:49.318357 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 17 23:56:49.355062 (dockerd)[1832]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 23:56:51.494348 dockerd[1832]: time="2026-04-17T23:56:51.493760645Z" level=info msg="Starting up" Apr 17 23:56:51.962399 dockerd[1832]: time="2026-04-17T23:56:51.961897405Z" level=info msg="Loading containers: start." Apr 17 23:56:52.128163 kernel: Initializing XFRM netlink socket Apr 17 23:56:52.252083 systemd-networkd[1236]: docker0: Link UP Apr 17 23:56:52.265568 dockerd[1832]: time="2026-04-17T23:56:52.265518395Z" level=info msg="Loading containers: done." Apr 17 23:56:52.299875 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Apr 17 23:56:52.306220 dockerd[1832]: time="2026-04-17T23:56:52.306112385Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 23:56:52.306371 dockerd[1832]: time="2026-04-17T23:56:52.306316555Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 17 23:56:52.306470 dockerd[1832]: time="2026-04-17T23:56:52.306448575Z" level=info msg="Daemon has completed initialization" Apr 17 23:56:52.308263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:56:52.554958 dockerd[1832]: time="2026-04-17T23:56:52.554183625Z" level=info msg="API listen on /run/docker.sock" Apr 17 23:56:52.563033 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 17 23:56:52.747872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:56:52.756129 (kubelet)[1981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:56:53.046304 kubelet[1981]: E0417 23:56:53.045368 1981 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:56:53.060317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:56:53.060663 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:56:53.749292 containerd[1571]: time="2026-04-17T23:56:53.749150865Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 17 23:56:54.523402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4270840330.mount: Deactivated successfully. 
Apr 17 23:56:57.583966 containerd[1571]: time="2026-04-17T23:56:57.583820615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:56:57.585636 containerd[1571]: time="2026-04-17T23:56:57.585517005Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193995" Apr 17 23:56:57.586581 containerd[1571]: time="2026-04-17T23:56:57.586555245Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:56:57.590973 containerd[1571]: time="2026-04-17T23:56:57.590943385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:56:57.592547 containerd[1571]: time="2026-04-17T23:56:57.592491435Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 3.84319304s" Apr 17 23:56:57.592642 containerd[1571]: time="2026-04-17T23:56:57.592618185Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 17 23:56:57.601288 containerd[1571]: time="2026-04-17T23:56:57.601261575Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 17 23:57:00.133620 containerd[1571]: time="2026-04-17T23:57:00.133541095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:00.135324 containerd[1571]: time="2026-04-17T23:57:00.135284045Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171453" Apr 17 23:57:00.136687 containerd[1571]: time="2026-04-17T23:57:00.135556515Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:00.138673 containerd[1571]: time="2026-04-17T23:57:00.138621935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:00.140745 containerd[1571]: time="2026-04-17T23:57:00.140101685Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 2.53872069s" Apr 17 23:57:00.140745 containerd[1571]: time="2026-04-17T23:57:00.140178865Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 17 
23:57:00.141989 containerd[1571]: time="2026-04-17T23:57:00.141968365Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 17 23:57:02.319250 containerd[1571]: time="2026-04-17T23:57:02.319171445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:02.320780 containerd[1571]: time="2026-04-17T23:57:02.320686945Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289762" Apr 17 23:57:02.323392 containerd[1571]: time="2026-04-17T23:57:02.322360155Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:02.330983 containerd[1571]: time="2026-04-17T23:57:02.330894475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:02.333219 containerd[1571]: time="2026-04-17T23:57:02.333180965Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 2.19118498s" Apr 17 23:57:02.333318 containerd[1571]: time="2026-04-17T23:57:02.333303145Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 17 23:57:02.336268 containerd[1571]: time="2026-04-17T23:57:02.336249575Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 17 23:57:03.334962 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 17 23:57:03.408050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:57:03.824267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:57:03.843803 (kubelet)[2073]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:57:04.092072 kubelet[2073]: E0417 23:57:04.091618 2073 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:57:04.114017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:57:04.114543 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:57:04.439483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2455125825.mount: Deactivated successfully. 
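The "Scheduled restart job, restart counter is at 2" line shows systemd's restart policy re-launching the kubelet while the config file is still absent, and the unset KUBELET_EXTRA_ARGS / KUBELET_KUBEADM_ARGS variables simply expand to empty strings until kubeadm fills them in. The control-plane image pulls keep running under containerd in parallel. A hedged way to watch both sides of that loop, using only standard systemd and crictl tooling:

    # Follow kubelet restarts as systemd retries the unit
    journalctl -u kubelet -f

    # Confirm containerd keeps pulling control-plane images while the kubelet is still crash-looping
    crictl images | grep registry.k8s.io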
Apr 17 23:57:05.954414 containerd[1571]: time="2026-04-17T23:57:05.954291445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:05.955761 containerd[1571]: time="2026-04-17T23:57:05.955682645Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010717" Apr 17 23:57:05.956212 containerd[1571]: time="2026-04-17T23:57:05.956174525Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:05.959328 containerd[1571]: time="2026-04-17T23:57:05.959245635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:05.960656 containerd[1571]: time="2026-04-17T23:57:05.959887555Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 3.62352607s" Apr 17 23:57:05.960656 containerd[1571]: time="2026-04-17T23:57:05.959973775Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 17 23:57:05.963685 containerd[1571]: time="2026-04-17T23:57:05.963640025Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 17 23:57:06.501349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount77848531.mount: Deactivated successfully. Apr 17 23:57:07.171497 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
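The sequence of PullImage lines (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, now coredns) matches the image set kubeadm pre-pulls for a control-plane node. If you want to see or pre-pull that set explicitly rather than waiting for the static pods, a sketch with real kubeadm subcommands (the version flag below is illustrative, taken from the v1.33.11 tags in the log, not from this host's kubeadm invocation):

    # List the images kubeadm expects for this control-plane version
    kubeadm config images list --kubernetes-version v1.33.11

    # Pre-pull them up front
    kubeadm config images pull --kubernetes-version v1.33.11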
Apr 17 23:57:08.688411 containerd[1571]: time="2026-04-17T23:57:08.688281409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:08.700972 containerd[1571]: time="2026-04-17T23:57:08.700890193Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942244" Apr 17 23:57:08.706494 containerd[1571]: time="2026-04-17T23:57:08.706440454Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:08.708779 containerd[1571]: time="2026-04-17T23:57:08.708733370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:08.710003 containerd[1571]: time="2026-04-17T23:57:08.709846051Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.746166707s" Apr 17 23:57:08.710003 containerd[1571]: time="2026-04-17T23:57:08.709878797Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 17 23:57:08.711874 containerd[1571]: time="2026-04-17T23:57:08.711855032Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 17 23:57:09.294916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3665302205.mount: Deactivated successfully. 
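The pause:3.10 pull that starts here is the sandbox image every pod sandbox is built from; containerd records which sandbox image its CRI plugin uses in its own config. A quick check, assuming the stock containerd config path and the usual sandbox_image key (verify the key against the containerd version on the node):

    # Show the sandbox (pause) image containerd is configured to use
    grep -n 'sandbox_image' /etc/containerd/config.toml

    # Confirm the image is present once the pull above completes
    crictl images | grep pause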
Apr 17 23:57:09.300372 containerd[1571]: time="2026-04-17T23:57:09.300303244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:09.301496 containerd[1571]: time="2026-04-17T23:57:09.301414025Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144" Apr 17 23:57:09.301831 containerd[1571]: time="2026-04-17T23:57:09.301809499Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:09.304101 containerd[1571]: time="2026-04-17T23:57:09.304074365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:09.305560 containerd[1571]: time="2026-04-17T23:57:09.304846795Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 592.958747ms" Apr 17 23:57:09.305560 containerd[1571]: time="2026-04-17T23:57:09.304918347Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 17 23:57:09.306269 containerd[1571]: time="2026-04-17T23:57:09.306241393Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 17 23:57:09.811712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3565789737.mount: Deactivated successfully. Apr 17 23:57:10.988596 containerd[1571]: time="2026-04-17T23:57:10.988519042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:10.989849 containerd[1571]: time="2026-04-17T23:57:10.989791253Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719432" Apr 17 23:57:10.990344 containerd[1571]: time="2026-04-17T23:57:10.990284969Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:10.995214 containerd[1571]: time="2026-04-17T23:57:10.995171386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:10.997212 containerd[1571]: time="2026-04-17T23:57:10.996275765Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.689996587s" Apr 17 23:57:10.997212 containerd[1571]: time="2026-04-17T23:57:10.996306512Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 17 23:57:13.283463 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
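The "Pulled image ... size ... in ..." lines give enough to estimate pull throughput; for the etcd pull above, 23,716,032 bytes in about 1.69 s works out to roughly 14 MB/s. A small sketch of that arithmetic using the exact values from the log:

    # Rough throughput for the etcd pull reported above (bytes / seconds / 1e6)
    awk 'BEGIN { printf "%.1f MB/s\n", 23716032 / 1.689996587 / 1e6 }'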
Apr 17 23:57:13.292904 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:57:13.347623 systemd[1]: Reloading requested from client PID 2239 ('systemctl') (unit session-7.scope)... Apr 17 23:57:13.347650 systemd[1]: Reloading... Apr 17 23:57:13.521235 zram_generator::config[2278]: No configuration found. Apr 17 23:57:13.679810 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:57:13.758698 systemd[1]: Reloading finished in 410 ms. Apr 17 23:57:13.814008 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 17 23:57:13.814362 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 17 23:57:13.814823 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:57:13.825212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:57:14.040353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:57:14.054852 (kubelet)[2344]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:57:14.169065 kubelet[2344]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:57:14.169065 kubelet[2344]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:57:14.169065 kubelet[2344]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
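The three deprecation warnings after this restart refer to flags the kubelet still accepts on the command line but expects in its config file: --container-runtime-endpoint and --volume-plugin-dir have KubeletConfiguration equivalents, and the pod-infra-container-image setting is being handed over to the CRI runtime. A hedged sketch of moving the first two into the config file, assuming the /var/lib/kubelet/config.yaml path from earlier in the log, the flexvolume directory the kubelet itself reports below, and field names per the KubeletConfiguration API (double-check both against the running kubelet version; the socket path is an assumption, not read from this host):

    # Illustrative fragment appended to the kubelet config file; values are placeholders, not this node's settings
    cat <<'EOF' >> /var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: "unix:///run/containerd/containerd.sock"
    volumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
    EOF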
Apr 17 23:57:14.169824 kubelet[2344]: I0417 23:57:14.169236 2344 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:57:14.914813 kubelet[2344]: I0417 23:57:14.903313 2344 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:57:14.914813 kubelet[2344]: I0417 23:57:14.903381 2344 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:57:14.914813 kubelet[2344]: I0417 23:57:14.903761 2344 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:57:14.981351 kubelet[2344]: E0417 23:57:14.981081 2344 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.232.15.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.232.15.112:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:57:14.983747 kubelet[2344]: I0417 23:57:14.983704 2344 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:57:14.994060 kubelet[2344]: E0417 23:57:14.993997 2344 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:57:14.994060 kubelet[2344]: I0417 23:57:14.994039 2344 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 17 23:57:14.999494 kubelet[2344]: I0417 23:57:14.999459 2344 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 17 23:57:15.002507 kubelet[2344]: I0417 23:57:15.002440 2344 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:57:15.002761 kubelet[2344]: I0417 23:57:15.002479 2344 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-15-112","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 17 23:57:15.002930 kubelet[2344]: I0417 23:57:15.002826 2344 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 23:57:15.002930 kubelet[2344]: I0417 23:57:15.002838 2344 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:57:15.003171 kubelet[2344]: I0417 23:57:15.003152 2344 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:57:15.026717 kubelet[2344]: I0417 23:57:15.026649 2344 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:57:15.026717 kubelet[2344]: I0417 23:57:15.026699 2344 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:57:15.026907 kubelet[2344]: I0417 23:57:15.026791 2344 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:57:15.026907 kubelet[2344]: I0417 23:57:15.026857 2344 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:57:15.058074 kubelet[2344]: E0417 23:57:15.057300 2344 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.232.15.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.15.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:57:15.058074 kubelet[2344]: E0417 23:57:15.057430 2344 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.232.15.112:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-15-112&limit=500&resourceVersion=0\": dial tcp 172.232.15.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Node" Apr 17 23:57:15.058074 kubelet[2344]: I0417 23:57:15.057601 2344 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:57:15.058349 kubelet[2344]: I0417 23:57:15.058259 2344 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:57:15.059498 kubelet[2344]: W0417 23:57:15.059454 2344 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 17 23:57:15.081951 kubelet[2344]: I0417 23:57:15.081728 2344 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:57:15.081951 kubelet[2344]: I0417 23:57:15.081804 2344 server.go:1289] "Started kubelet" Apr 17 23:57:15.082627 kubelet[2344]: I0417 23:57:15.082547 2344 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:57:15.083859 kubelet[2344]: I0417 23:57:15.083820 2344 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:57:15.091514 kubelet[2344]: I0417 23:57:15.091425 2344 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:57:15.094017 kubelet[2344]: I0417 23:57:15.092385 2344 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:57:15.094017 kubelet[2344]: E0417 23:57:15.092571 2344 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.232.15.112:6443/api/v1/namespaces/default/events\": dial tcp 172.232.15.112:6443: connect: connection refused" event="&Event{ObjectMeta:{172-232-15-112.18a74a44441a5a0b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-232-15-112,UID:172-232-15-112,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-232-15-112,},FirstTimestamp:2026-04-17 23:57:15.081759243 +0000 UTC m=+0.998541789,LastTimestamp:2026-04-17 23:57:15.081759243 +0000 UTC m=+0.998541789,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-232-15-112,}" Apr 17 23:57:15.096648 kubelet[2344]: I0417 23:57:15.096631 2344 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:57:15.099167 kubelet[2344]: I0417 23:57:15.099144 2344 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:57:15.101223 kubelet[2344]: I0417 23:57:15.101202 2344 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:57:15.101603 kubelet[2344]: E0417 23:57:15.101569 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-232-15-112\" not found" Apr 17 23:57:15.101932 kubelet[2344]: I0417 23:57:15.101914 2344 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:57:15.102112 kubelet[2344]: I0417 23:57:15.102083 2344 reconciler.go:26] "Reconciler: start to sync state" Apr 17 23:57:15.104147 kubelet[2344]: E0417 23:57:15.103546 2344 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.232.15.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.15.112:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:57:15.105618 kubelet[2344]: E0417 23:57:15.105579 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.15.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-15-112?timeout=10s\": dial tcp 172.232.15.112:6443: connect: connection refused" interval="200ms" Apr 17 23:57:15.106386 kubelet[2344]: I0417 23:57:15.106354 2344 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:57:15.106512 kubelet[2344]: I0417 23:57:15.106464 2344 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:57:15.107102 kubelet[2344]: E0417 23:57:15.107072 2344 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:57:15.110010 kubelet[2344]: I0417 23:57:15.109985 2344 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:57:15.159573 kubelet[2344]: I0417 23:57:15.159542 2344 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:57:15.159915 kubelet[2344]: I0417 23:57:15.159798 2344 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:57:15.159915 kubelet[2344]: I0417 23:57:15.159859 2344 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:57:15.182083 kubelet[2344]: I0417 23:57:15.180569 2344 policy_none.go:49] "None policy: Start" Apr 17 23:57:15.182893 kubelet[2344]: I0417 23:57:15.182615 2344 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 23:57:15.182893 kubelet[2344]: I0417 23:57:15.182693 2344 state_mem.go:35] "Initializing new in-memory state store" Apr 17 23:57:15.197084 kubelet[2344]: E0417 23:57:15.196907 2344 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:57:15.197246 kubelet[2344]: I0417 23:57:15.197212 2344 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:57:15.197296 kubelet[2344]: I0417 23:57:15.197247 2344 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:57:15.197645 kubelet[2344]: I0417 23:57:15.197462 2344 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 23:57:15.200798 kubelet[2344]: I0417 23:57:15.200778 2344 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:57:15.201956 kubelet[2344]: I0417 23:57:15.201489 2344 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 23:57:15.201956 kubelet[2344]: I0417 23:57:15.201553 2344 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:57:15.201956 kubelet[2344]: I0417 23:57:15.201598 2344 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 23:57:15.201956 kubelet[2344]: I0417 23:57:15.201623 2344 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:57:15.201956 kubelet[2344]: E0417 23:57:15.201703 2344 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Apr 17 23:57:15.207706 kubelet[2344]: E0417 23:57:15.207671 2344 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.232.15.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.232.15.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:57:15.211455 kubelet[2344]: E0417 23:57:15.211331 2344 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:57:15.211674 kubelet[2344]: E0417 23:57:15.211468 2344 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-232-15-112\" not found" Apr 17 23:57:15.300502 kubelet[2344]: I0417 23:57:15.300394 2344 kubelet_node_status.go:75] "Attempting to register node" node="172-232-15-112" Apr 17 23:57:15.301073 kubelet[2344]: E0417 23:57:15.301002 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.15.112:6443/api/v1/nodes\": dial tcp 172.232.15.112:6443: connect: connection refused" node="172-232-15-112" Apr 17 23:57:15.306741 kubelet[2344]: E0417 23:57:15.306546 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.15.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-15-112?timeout=10s\": dial tcp 172.232.15.112:6443: connect: connection refused" interval="400ms" Apr 17 23:57:15.310622 kubelet[2344]: E0417 23:57:15.310547 2344 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-15-112\" not found" node="172-232-15-112" Apr 17 23:57:15.337418 kubelet[2344]: E0417 23:57:15.337254 2344 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-15-112\" not found" node="172-232-15-112" Apr 17 23:57:15.339485 kubelet[2344]: E0417 23:57:15.339442 2344 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-15-112\" not found" node="172-232-15-112" Apr 17 23:57:15.403833 kubelet[2344]: I0417 23:57:15.403762 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc0c53ba6830a061a54282caa5bd4666-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-15-112\" (UID: \"bc0c53ba6830a061a54282caa5bd4666\") " pod="kube-system/kube-apiserver-172-232-15-112" Apr 17 23:57:15.403833 kubelet[2344]: I0417 23:57:15.403831 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f819b6b0d3a0eb7f7296069d88bc79d8-ca-certs\") pod \"kube-controller-manager-172-232-15-112\" (UID: \"f819b6b0d3a0eb7f7296069d88bc79d8\") " pod="kube-system/kube-controller-manager-172-232-15-112" Apr 17 23:57:15.404085 kubelet[2344]: I0417 23:57:15.403856 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/f819b6b0d3a0eb7f7296069d88bc79d8-flexvolume-dir\") pod \"kube-controller-manager-172-232-15-112\" (UID: \"f819b6b0d3a0eb7f7296069d88bc79d8\") " pod="kube-system/kube-controller-manager-172-232-15-112" Apr 17 23:57:15.404085 kubelet[2344]: I0417 23:57:15.403879 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f819b6b0d3a0eb7f7296069d88bc79d8-kubeconfig\") pod \"kube-controller-manager-172-232-15-112\" (UID: \"f819b6b0d3a0eb7f7296069d88bc79d8\") " pod="kube-system/kube-controller-manager-172-232-15-112" Apr 17 23:57:15.404085 kubelet[2344]: I0417 23:57:15.403941 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f819b6b0d3a0eb7f7296069d88bc79d8-k8s-certs\") pod \"kube-controller-manager-172-232-15-112\" (UID: \"f819b6b0d3a0eb7f7296069d88bc79d8\") " pod="kube-system/kube-controller-manager-172-232-15-112" Apr 17 23:57:15.404085 kubelet[2344]: I0417 23:57:15.403980 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f819b6b0d3a0eb7f7296069d88bc79d8-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-15-112\" (UID: \"f819b6b0d3a0eb7f7296069d88bc79d8\") " pod="kube-system/kube-controller-manager-172-232-15-112" Apr 17 23:57:15.404085 kubelet[2344]: I0417 23:57:15.404043 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e4e47616111459a22924f56dc3f0865-kubeconfig\") pod \"kube-scheduler-172-232-15-112\" (UID: \"6e4e47616111459a22924f56dc3f0865\") " pod="kube-system/kube-scheduler-172-232-15-112" Apr 17 23:57:15.404244 kubelet[2344]: I0417 23:57:15.404084 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc0c53ba6830a061a54282caa5bd4666-ca-certs\") pod \"kube-apiserver-172-232-15-112\" (UID: \"bc0c53ba6830a061a54282caa5bd4666\") " pod="kube-system/kube-apiserver-172-232-15-112" Apr 17 23:57:15.404244 kubelet[2344]: I0417 23:57:15.404175 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc0c53ba6830a061a54282caa5bd4666-k8s-certs\") pod \"kube-apiserver-172-232-15-112\" (UID: \"bc0c53ba6830a061a54282caa5bd4666\") " pod="kube-system/kube-apiserver-172-232-15-112" Apr 17 23:57:15.504340 kubelet[2344]: I0417 23:57:15.503815 2344 kubelet_node_status.go:75] "Attempting to register node" node="172-232-15-112" Apr 17 23:57:15.504340 kubelet[2344]: E0417 23:57:15.504257 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.15.112:6443/api/v1/nodes\": dial tcp 172.232.15.112:6443: connect: connection refused" node="172-232-15-112" Apr 17 23:57:15.612428 kubelet[2344]: E0417 23:57:15.612364 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:15.613713 containerd[1571]: time="2026-04-17T23:57:15.613594786Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-172-232-15-112,Uid:6e4e47616111459a22924f56dc3f0865,Namespace:kube-system,Attempt:0,}" Apr 17 23:57:15.639200 kubelet[2344]: E0417 23:57:15.639159 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:15.640177 containerd[1571]: time="2026-04-17T23:57:15.639882259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-15-112,Uid:bc0c53ba6830a061a54282caa5bd4666,Namespace:kube-system,Attempt:0,}" Apr 17 23:57:15.640239 kubelet[2344]: E0417 23:57:15.639957 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:15.641040 containerd[1571]: time="2026-04-17T23:57:15.640989581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-15-112,Uid:f819b6b0d3a0eb7f7296069d88bc79d8,Namespace:kube-system,Attempt:0,}" Apr 17 23:57:15.708372 kubelet[2344]: E0417 23:57:15.708257 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.15.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-15-112?timeout=10s\": dial tcp 172.232.15.112:6443: connect: connection refused" interval="800ms" Apr 17 23:57:15.907647 kubelet[2344]: I0417 23:57:15.907135 2344 kubelet_node_status.go:75] "Attempting to register node" node="172-232-15-112" Apr 17 23:57:15.907647 kubelet[2344]: E0417 23:57:15.907544 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.15.112:6443/api/v1/nodes\": dial tcp 172.232.15.112:6443: connect: connection refused" node="172-232-15-112" Apr 17 23:57:16.029196 kubelet[2344]: E0417 23:57:16.029110 2344 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.232.15.112:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-15-112&limit=500&resourceVersion=0\": dial tcp 172.232.15.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:57:16.203705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2922040558.mount: Deactivated successfully. 
Apr 17 23:57:16.208462 containerd[1571]: time="2026-04-17T23:57:16.208381914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:57:16.209437 containerd[1571]: time="2026-04-17T23:57:16.209396819Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062" Apr 17 23:57:16.210402 containerd[1571]: time="2026-04-17T23:57:16.210364147Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:57:16.211514 containerd[1571]: time="2026-04-17T23:57:16.211402840Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:57:16.212158 containerd[1571]: time="2026-04-17T23:57:16.212135166Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:57:16.212498 containerd[1571]: time="2026-04-17T23:57:16.212457112Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:57:16.217853 containerd[1571]: time="2026-04-17T23:57:16.217812015Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:57:16.220448 containerd[1571]: time="2026-04-17T23:57:16.219967056Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 579.949258ms" Apr 17 23:57:16.222447 containerd[1571]: time="2026-04-17T23:57:16.222399456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:57:16.223892 containerd[1571]: time="2026-04-17T23:57:16.223800742Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 582.706708ms" Apr 17 23:57:16.224853 containerd[1571]: time="2026-04-17T23:57:16.224807157Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 610.933223ms" Apr 17 23:57:16.404434 kubelet[2344]: E0417 23:57:16.404367 2344 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.232.15.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.15.112:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:57:16.526325 kubelet[2344]: E0417 23:57:16.526173 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.15.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-15-112?timeout=10s\": dial tcp 172.232.15.112:6443: connect: connection refused" interval="1.6s" Apr 17 23:57:16.674088 kubelet[2344]: E0417 23:57:16.673928 2344 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.232.15.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.15.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:57:16.717908 kubelet[2344]: I0417 23:57:16.717264 2344 kubelet_node_status.go:75] "Attempting to register node" node="172-232-15-112" Apr 17 23:57:16.717908 kubelet[2344]: E0417 23:57:16.717857 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.15.112:6443/api/v1/nodes\": dial tcp 172.232.15.112:6443: connect: connection refused" node="172-232-15-112" Apr 17 23:57:16.748587 containerd[1571]: time="2026-04-17T23:57:16.748455143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:57:16.749702 containerd[1571]: time="2026-04-17T23:57:16.749245725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:57:16.749702 containerd[1571]: time="2026-04-17T23:57:16.749307260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:57:16.749702 containerd[1571]: time="2026-04-17T23:57:16.749501656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:57:16.750681 containerd[1571]: time="2026-04-17T23:57:16.750439686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:57:16.750681 containerd[1571]: time="2026-04-17T23:57:16.750483623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:57:16.750681 containerd[1571]: time="2026-04-17T23:57:16.750497912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:57:16.750681 containerd[1571]: time="2026-04-17T23:57:16.750588785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:57:16.797794 containerd[1571]: time="2026-04-17T23:57:16.789659111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:57:16.797794 containerd[1571]: time="2026-04-17T23:57:16.789724286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:57:16.797794 containerd[1571]: time="2026-04-17T23:57:16.789740265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:57:16.797794 containerd[1571]: time="2026-04-17T23:57:16.789824309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:57:16.799481 kubelet[2344]: E0417 23:57:16.799447 2344 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.232.15.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.232.15.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:57:17.043164 kubelet[2344]: E0417 23:57:17.041342 2344 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.232.15.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.232.15.112:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:57:17.068706 kubelet[2344]: E0417 23:57:17.064105 2344 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.232.15.112:6443/api/v1/namespaces/default/events\": dial tcp 172.232.15.112:6443: connect: connection refused" event="&Event{ObjectMeta:{172-232-15-112.18a74a44441a5a0b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-232-15-112,UID:172-232-15-112,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-232-15-112,},FirstTimestamp:2026-04-17 23:57:15.081759243 +0000 UTC m=+0.998541789,LastTimestamp:2026-04-17 23:57:15.081759243 +0000 UTC m=+0.998541789,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-232-15-112,}" Apr 17 23:57:17.228985 containerd[1571]: time="2026-04-17T23:57:17.228872567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-15-112,Uid:bc0c53ba6830a061a54282caa5bd4666,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d469784d15c9df5e2bdede64c909acdc5565a8c4f9ad341f39b9f63cd451c32\"" Apr 17 23:57:17.236347 kubelet[2344]: E0417 23:57:17.236294 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:17.248307 containerd[1571]: time="2026-04-17T23:57:17.248252141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-15-112,Uid:f819b6b0d3a0eb7f7296069d88bc79d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"733218a270c6576db722bb2ce15b6b34a40cad0f68da8405dc99cd0451851d06\"" Apr 17 23:57:17.251755 kubelet[2344]: E0417 23:57:17.251731 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:17.252333 containerd[1571]: time="2026-04-17T23:57:17.250902657Z" level=info msg="CreateContainer within sandbox \"5d469784d15c9df5e2bdede64c909acdc5565a8c4f9ad341f39b9f63cd451c32\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:57:17.254557 containerd[1571]: time="2026-04-17T23:57:17.254526016Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-172-232-15-112,Uid:6e4e47616111459a22924f56dc3f0865,Namespace:kube-system,Attempt:0,} returns sandbox id \"6849554b4fa822f452aa585f95df8719dc9eec1b3e49234eb3f405307b033448\"" Apr 17 23:57:17.257029 kubelet[2344]: E0417 23:57:17.256999 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:17.257938 containerd[1571]: time="2026-04-17T23:57:17.257915230Z" level=info msg="CreateContainer within sandbox \"733218a270c6576db722bb2ce15b6b34a40cad0f68da8405dc99cd0451851d06\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:57:17.265510 containerd[1571]: time="2026-04-17T23:57:17.263360072Z" level=info msg="CreateContainer within sandbox \"6849554b4fa822f452aa585f95df8719dc9eec1b3e49234eb3f405307b033448\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:57:17.276315 containerd[1571]: time="2026-04-17T23:57:17.275916640Z" level=info msg="CreateContainer within sandbox \"733218a270c6576db722bb2ce15b6b34a40cad0f68da8405dc99cd0451851d06\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b2e387e661c1204d42ad241ddd05c5c8a2ed1d88aee6840fe688a59d14f98186\"" Apr 17 23:57:17.277012 containerd[1571]: time="2026-04-17T23:57:17.276981446Z" level=info msg="StartContainer for \"b2e387e661c1204d42ad241ddd05c5c8a2ed1d88aee6840fe688a59d14f98186\"" Apr 17 23:57:17.280856 containerd[1571]: time="2026-04-17T23:57:17.280742415Z" level=info msg="CreateContainer within sandbox \"5d469784d15c9df5e2bdede64c909acdc5565a8c4f9ad341f39b9f63cd451c32\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2cecc33580bcbf4040062e4869fe2cfaf4d69ca8025cb29c9d89dfa325773db9\"" Apr 17 23:57:17.281240 containerd[1571]: time="2026-04-17T23:57:17.281214092Z" level=info msg="StartContainer for \"2cecc33580bcbf4040062e4869fe2cfaf4d69ca8025cb29c9d89dfa325773db9\"" Apr 17 23:57:17.300293 containerd[1571]: time="2026-04-17T23:57:17.300149557Z" level=info msg="CreateContainer within sandbox \"6849554b4fa822f452aa585f95df8719dc9eec1b3e49234eb3f405307b033448\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f922b038cfc1afd8d8b0eb8be0a52f080f6593524ca0a2ff8d299b50999b3e5b\"" Apr 17 23:57:17.301179 containerd[1571]: time="2026-04-17T23:57:17.300847648Z" level=info msg="StartContainer for \"f922b038cfc1afd8d8b0eb8be0a52f080f6593524ca0a2ff8d299b50999b3e5b\"" Apr 17 23:57:17.467153 containerd[1571]: time="2026-04-17T23:57:17.466604746Z" level=info msg="StartContainer for \"2cecc33580bcbf4040062e4869fe2cfaf4d69ca8025cb29c9d89dfa325773db9\" returns successfully" Apr 17 23:57:17.503096 containerd[1571]: time="2026-04-17T23:57:17.502741416Z" level=info msg="StartContainer for \"b2e387e661c1204d42ad241ddd05c5c8a2ed1d88aee6840fe688a59d14f98186\" returns successfully" Apr 17 23:57:17.503696 containerd[1571]: time="2026-04-17T23:57:17.503404970Z" level=info msg="StartContainer for \"f922b038cfc1afd8d8b0eb8be0a52f080f6593524ca0a2ff8d299b50999b3e5b\" returns successfully" Apr 17 23:57:18.240436 kubelet[2344]: E0417 23:57:18.239384 2344 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-15-112\" not found" node="172-232-15-112" Apr 17 23:57:18.240436 kubelet[2344]: E0417 23:57:18.239899 2344 kubelet.go:3305] "No need to create a mirror pod, since failed to 
get node info from the cluster" err="node \"172-232-15-112\" not found" node="172-232-15-112" Apr 17 23:57:18.240436 kubelet[2344]: E0417 23:57:18.239951 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:18.240436 kubelet[2344]: E0417 23:57:18.240176 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:18.245595 kubelet[2344]: E0417 23:57:18.245086 2344 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-15-112\" not found" node="172-232-15-112" Apr 17 23:57:18.245595 kubelet[2344]: E0417 23:57:18.245327 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:18.445163 kubelet[2344]: I0417 23:57:18.444177 2344 kubelet_node_status.go:75] "Attempting to register node" node="172-232-15-112" Apr 17 23:57:19.284822 kubelet[2344]: E0417 23:57:19.284756 2344 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-15-112\" not found" node="172-232-15-112" Apr 17 23:57:19.286685 kubelet[2344]: E0417 23:57:19.285902 2344 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-15-112\" not found" node="172-232-15-112" Apr 17 23:57:19.286685 kubelet[2344]: E0417 23:57:19.286371 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:19.286685 kubelet[2344]: E0417 23:57:19.286625 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:20.313232 kubelet[2344]: E0417 23:57:20.311851 2344 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-15-112\" not found" node="172-232-15-112" Apr 17 23:57:20.313232 kubelet[2344]: E0417 23:57:20.312047 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:20.313232 kubelet[2344]: E0417 23:57:20.312422 2344 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-232-15-112\" not found" node="172-232-15-112" Apr 17 23:57:20.376409 kubelet[2344]: I0417 23:57:20.376154 2344 kubelet_node_status.go:78] "Successfully registered node" node="172-232-15-112" Apr 17 23:57:20.376409 kubelet[2344]: E0417 23:57:20.376216 2344 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-232-15-112\": node \"172-232-15-112\" not found" Apr 17 23:57:20.401912 kubelet[2344]: I0417 23:57:20.401884 2344 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-15-112" Apr 17 23:57:20.411432 kubelet[2344]: E0417 23:57:20.411159 2344 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-232-15-112\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-232-15-112" Apr 17 23:57:20.411432 kubelet[2344]: I0417 23:57:20.411193 2344 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-15-112" Apr 17 23:57:20.413509 kubelet[2344]: E0417 23:57:20.413278 2344 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-15-112\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-232-15-112" Apr 17 23:57:20.413509 kubelet[2344]: I0417 23:57:20.413317 2344 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-15-112" Apr 17 23:57:20.414744 kubelet[2344]: E0417 23:57:20.414723 2344 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-232-15-112\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-232-15-112" Apr 17 23:57:21.054610 kubelet[2344]: I0417 23:57:21.054550 2344 apiserver.go:52] "Watching apiserver" Apr 17 23:57:21.103527 kubelet[2344]: I0417 23:57:21.103465 2344 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:57:21.147279 update_engine[1553]: I20260417 23:57:21.147204 1553 update_attempter.cc:509] Updating boot flags... Apr 17 23:57:21.334900 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2631) Apr 17 23:57:21.562640 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2632) Apr 17 23:57:22.537342 systemd[1]: Reloading requested from client PID 2641 ('systemctl') (unit session-7.scope)... Apr 17 23:57:22.537357 systemd[1]: Reloading... Apr 17 23:57:22.649193 zram_generator::config[2678]: No configuration found. Apr 17 23:57:22.921789 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:57:23.001067 systemd[1]: Reloading finished in 463 ms. Apr 17 23:57:23.044913 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:57:23.055419 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:57:23.055874 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:57:23.061478 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:57:23.330655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:57:23.341823 (kubelet)[2741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:57:23.409360 kubelet[2741]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:57:23.409953 kubelet[2741]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:57:23.410020 kubelet[2741]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:57:23.410376 kubelet[2741]: I0417 23:57:23.410342 2741 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:57:23.418731 kubelet[2741]: I0417 23:57:23.418703 2741 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:57:23.418832 kubelet[2741]: I0417 23:57:23.418819 2741 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:57:23.419329 kubelet[2741]: I0417 23:57:23.419086 2741 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:57:23.420893 kubelet[2741]: I0417 23:57:23.420877 2741 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:57:23.425161 kubelet[2741]: I0417 23:57:23.424708 2741 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:57:23.430250 kubelet[2741]: E0417 23:57:23.430212 2741 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:57:23.430250 kubelet[2741]: I0417 23:57:23.430242 2741 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 17 23:57:23.438803 kubelet[2741]: I0417 23:57:23.437416 2741 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 17 23:57:23.438803 kubelet[2741]: I0417 23:57:23.438738 2741 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:57:23.439304 kubelet[2741]: I0417 23:57:23.438782 2741 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-15-112","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 
17 23:57:23.439487 kubelet[2741]: I0417 23:57:23.439473 2741 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 23:57:23.439564 kubelet[2741]: I0417 23:57:23.439554 2741 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:57:23.439718 kubelet[2741]: I0417 23:57:23.439699 2741 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:57:23.440016 kubelet[2741]: I0417 23:57:23.440004 2741 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:57:23.440956 kubelet[2741]: I0417 23:57:23.440942 2741 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:57:23.441067 kubelet[2741]: I0417 23:57:23.441057 2741 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:57:23.441169 kubelet[2741]: I0417 23:57:23.441157 2741 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:57:23.443962 kubelet[2741]: I0417 23:57:23.443943 2741 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:57:23.444742 kubelet[2741]: I0417 23:57:23.444724 2741 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:57:23.448390 kubelet[2741]: I0417 23:57:23.448376 2741 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:57:23.448627 kubelet[2741]: I0417 23:57:23.448615 2741 server.go:1289] "Started kubelet" Apr 17 23:57:23.455061 kubelet[2741]: I0417 23:57:23.455037 2741 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:57:23.462373 kubelet[2741]: I0417 23:57:23.462336 2741 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:57:23.464020 kubelet[2741]: I0417 23:57:23.463850 2741 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:57:23.466288 kubelet[2741]: I0417 23:57:23.466270 2741 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:57:23.467027 kubelet[2741]: I0417 23:57:23.467012 2741 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:57:23.469779 kubelet[2741]: I0417 23:57:23.469753 2741 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:57:23.493293 kubelet[2741]: I0417 23:57:23.493267 2741 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:57:23.493540 kubelet[2741]: I0417 23:57:23.493521 2741 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:57:23.496648 kubelet[2741]: I0417 23:57:23.496633 2741 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:57:23.499599 kubelet[2741]: I0417 23:57:23.499586 2741 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:57:23.499835 kubelet[2741]: I0417 23:57:23.499823 2741 reconciler.go:26] "Reconciler: start to sync state" Apr 17 23:57:23.500669 kubelet[2741]: E0417 23:57:23.500526 2741 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:57:23.503670 kubelet[2741]: I0417 23:57:23.503649 2741 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:57:23.507274 kubelet[2741]: I0417 23:57:23.507253 2741 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 23:57:23.508882 kubelet[2741]: I0417 23:57:23.508858 2741 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 23:57:23.508950 kubelet[2741]: I0417 23:57:23.508901 2741 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:57:23.508950 kubelet[2741]: I0417 23:57:23.508928 2741 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:57:23.509001 kubelet[2741]: I0417 23:57:23.508952 2741 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:57:23.509034 kubelet[2741]: E0417 23:57:23.509006 2741 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:57:23.596981 kubelet[2741]: I0417 23:57:23.596858 2741 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:57:23.599022 kubelet[2741]: I0417 23:57:23.597200 2741 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:57:23.599022 kubelet[2741]: I0417 23:57:23.597241 2741 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:57:23.599022 kubelet[2741]: I0417 23:57:23.597433 2741 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 23:57:23.599022 kubelet[2741]: I0417 23:57:23.597987 2741 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 23:57:23.599022 kubelet[2741]: I0417 23:57:23.598030 2741 policy_none.go:49] "None policy: Start" Apr 17 23:57:23.599022 kubelet[2741]: I0417 23:57:23.598058 2741 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 23:57:23.599022 kubelet[2741]: I0417 23:57:23.598089 2741 state_mem.go:35] "Initializing new in-memory state store" Apr 17 23:57:23.599022 kubelet[2741]: I0417 23:57:23.598220 2741 state_mem.go:75] "Updated machine memory state" Apr 17 23:57:23.603003 kubelet[2741]: E0417 23:57:23.601660 2741 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:57:23.603003 kubelet[2741]: I0417 23:57:23.601969 2741 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:57:23.603003 kubelet[2741]: I0417 23:57:23.601989 2741 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:57:23.603258 kubelet[2741]: I0417 23:57:23.603220 2741 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:57:23.606073 kubelet[2741]: E0417 23:57:23.606050 2741 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 17 23:57:23.610033 kubelet[2741]: I0417 23:57:23.610009 2741 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-15-112" Apr 17 23:57:23.611378 kubelet[2741]: I0417 23:57:23.611355 2741 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-15-112" Apr 17 23:57:23.613196 kubelet[2741]: I0417 23:57:23.612569 2741 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-15-112" Apr 17 23:57:23.712718 kubelet[2741]: I0417 23:57:23.712670 2741 kubelet_node_status.go:75] "Attempting to register node" node="172-232-15-112" Apr 17 23:57:23.722134 kubelet[2741]: I0417 23:57:23.722078 2741 kubelet_node_status.go:124] "Node was previously registered" node="172-232-15-112" Apr 17 23:57:23.722249 kubelet[2741]: I0417 23:57:23.722222 2741 kubelet_node_status.go:78] "Successfully registered node" node="172-232-15-112" Apr 17 23:57:23.802170 kubelet[2741]: I0417 23:57:23.801612 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f819b6b0d3a0eb7f7296069d88bc79d8-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-15-112\" (UID: \"f819b6b0d3a0eb7f7296069d88bc79d8\") " pod="kube-system/kube-controller-manager-172-232-15-112" Apr 17 23:57:23.802170 kubelet[2741]: I0417 23:57:23.801669 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc0c53ba6830a061a54282caa5bd4666-ca-certs\") pod \"kube-apiserver-172-232-15-112\" (UID: \"bc0c53ba6830a061a54282caa5bd4666\") " pod="kube-system/kube-apiserver-172-232-15-112" Apr 17 23:57:23.802170 kubelet[2741]: I0417 23:57:23.801696 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f819b6b0d3a0eb7f7296069d88bc79d8-ca-certs\") pod \"kube-controller-manager-172-232-15-112\" (UID: \"f819b6b0d3a0eb7f7296069d88bc79d8\") " pod="kube-system/kube-controller-manager-172-232-15-112" Apr 17 23:57:23.802170 kubelet[2741]: I0417 23:57:23.801712 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f819b6b0d3a0eb7f7296069d88bc79d8-kubeconfig\") pod \"kube-controller-manager-172-232-15-112\" (UID: \"f819b6b0d3a0eb7f7296069d88bc79d8\") " pod="kube-system/kube-controller-manager-172-232-15-112" Apr 17 23:57:23.802170 kubelet[2741]: I0417 23:57:23.801729 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e4e47616111459a22924f56dc3f0865-kubeconfig\") pod \"kube-scheduler-172-232-15-112\" (UID: \"6e4e47616111459a22924f56dc3f0865\") " pod="kube-system/kube-scheduler-172-232-15-112" Apr 17 23:57:23.802408 kubelet[2741]: I0417 23:57:23.801743 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc0c53ba6830a061a54282caa5bd4666-k8s-certs\") pod \"kube-apiserver-172-232-15-112\" (UID: \"bc0c53ba6830a061a54282caa5bd4666\") " pod="kube-system/kube-apiserver-172-232-15-112" Apr 17 23:57:23.802408 kubelet[2741]: I0417 23:57:23.801757 2741 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc0c53ba6830a061a54282caa5bd4666-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-15-112\" (UID: \"bc0c53ba6830a061a54282caa5bd4666\") " pod="kube-system/kube-apiserver-172-232-15-112" Apr 17 23:57:23.802408 kubelet[2741]: I0417 23:57:23.801776 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f819b6b0d3a0eb7f7296069d88bc79d8-flexvolume-dir\") pod \"kube-controller-manager-172-232-15-112\" (UID: \"f819b6b0d3a0eb7f7296069d88bc79d8\") " pod="kube-system/kube-controller-manager-172-232-15-112" Apr 17 23:57:23.802408 kubelet[2741]: I0417 23:57:23.801791 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f819b6b0d3a0eb7f7296069d88bc79d8-k8s-certs\") pod \"kube-controller-manager-172-232-15-112\" (UID: \"f819b6b0d3a0eb7f7296069d88bc79d8\") " pod="kube-system/kube-controller-manager-172-232-15-112" Apr 17 23:57:23.920145 kubelet[2741]: E0417 23:57:23.918225 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:23.920145 kubelet[2741]: E0417 23:57:23.919263 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:23.941168 kubelet[2741]: E0417 23:57:23.940543 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:24.449182 kubelet[2741]: I0417 23:57:24.449132 2741 apiserver.go:52] "Watching apiserver" Apr 17 23:57:24.506246 kubelet[2741]: I0417 23:57:24.500327 2741 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:57:24.549404 kubelet[2741]: I0417 23:57:24.549363 2741 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-15-112" Apr 17 23:57:24.549924 kubelet[2741]: I0417 23:57:24.549908 2741 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-15-112" Apr 17 23:57:24.552729 kubelet[2741]: E0417 23:57:24.552659 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:24.562609 kubelet[2741]: E0417 23:57:24.562320 2741 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-232-15-112\" already exists" pod="kube-system/kube-scheduler-172-232-15-112" Apr 17 23:57:24.562609 kubelet[2741]: E0417 23:57:24.562580 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:24.562773 kubelet[2741]: E0417 23:57:24.562679 2741 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-15-112\" already exists" pod="kube-system/kube-apiserver-172-232-15-112" Apr 17 23:57:24.562773 kubelet[2741]: E0417 23:57:24.562761 2741 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:24.595805 kubelet[2741]: I0417 23:57:24.594350 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-232-15-112" podStartSLOduration=1.59432532 podStartE2EDuration="1.59432532s" podCreationTimestamp="2026-04-17 23:57:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:57:24.591063384 +0000 UTC m=+1.237311937" watchObservedRunningTime="2026-04-17 23:57:24.59432532 +0000 UTC m=+1.240573863" Apr 17 23:57:24.603683 kubelet[2741]: I0417 23:57:24.603623 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-232-15-112" podStartSLOduration=1.60360571 podStartE2EDuration="1.60360571s" podCreationTimestamp="2026-04-17 23:57:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:57:24.603551482 +0000 UTC m=+1.249800035" watchObservedRunningTime="2026-04-17 23:57:24.60360571 +0000 UTC m=+1.249854253" Apr 17 23:57:24.614198 kubelet[2741]: I0417 23:57:24.613944 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-232-15-112" podStartSLOduration=1.613925324 podStartE2EDuration="1.613925324s" podCreationTimestamp="2026-04-17 23:57:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:57:24.613440035 +0000 UTC m=+1.259688578" watchObservedRunningTime="2026-04-17 23:57:24.613925324 +0000 UTC m=+1.260173867" Apr 17 23:57:25.553112 kubelet[2741]: E0417 23:57:25.551774 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:25.553112 kubelet[2741]: E0417 23:57:25.551808 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:26.553828 kubelet[2741]: E0417 23:57:26.553766 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:28.760485 kubelet[2741]: I0417 23:57:28.760416 2741 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:57:28.761976 containerd[1571]: time="2026-04-17T23:57:28.761731031Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 17 23:57:28.763025 kubelet[2741]: I0417 23:57:28.762243 2741 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:57:29.520692 kubelet[2741]: I0417 23:57:29.520628 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b164e70e-a0d6-4565-bf63-2b7e0088f1db-kube-proxy\") pod \"kube-proxy-nh94g\" (UID: \"b164e70e-a0d6-4565-bf63-2b7e0088f1db\") " pod="kube-system/kube-proxy-nh94g" Apr 17 23:57:29.520692 kubelet[2741]: I0417 23:57:29.520687 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b164e70e-a0d6-4565-bf63-2b7e0088f1db-xtables-lock\") pod \"kube-proxy-nh94g\" (UID: \"b164e70e-a0d6-4565-bf63-2b7e0088f1db\") " pod="kube-system/kube-proxy-nh94g" Apr 17 23:57:29.520868 kubelet[2741]: I0417 23:57:29.520709 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzxfj\" (UniqueName: \"kubernetes.io/projected/b164e70e-a0d6-4565-bf63-2b7e0088f1db-kube-api-access-mzxfj\") pod \"kube-proxy-nh94g\" (UID: \"b164e70e-a0d6-4565-bf63-2b7e0088f1db\") " pod="kube-system/kube-proxy-nh94g" Apr 17 23:57:29.520868 kubelet[2741]: I0417 23:57:29.520729 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b164e70e-a0d6-4565-bf63-2b7e0088f1db-lib-modules\") pod \"kube-proxy-nh94g\" (UID: \"b164e70e-a0d6-4565-bf63-2b7e0088f1db\") " pod="kube-system/kube-proxy-nh94g" Apr 17 23:57:29.706878 kubelet[2741]: E0417 23:57:29.706817 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:29.708066 containerd[1571]: time="2026-04-17T23:57:29.707921503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nh94g,Uid:b164e70e-a0d6-4565-bf63-2b7e0088f1db,Namespace:kube-system,Attempt:0,}" Apr 17 23:57:29.961113 containerd[1571]: time="2026-04-17T23:57:29.960892524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:57:29.962587 containerd[1571]: time="2026-04-17T23:57:29.962052567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:57:29.962587 containerd[1571]: time="2026-04-17T23:57:29.962250171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:57:29.963215 containerd[1571]: time="2026-04-17T23:57:29.962846422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:57:30.223078 containerd[1571]: time="2026-04-17T23:57:30.222923602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nh94g,Uid:b164e70e-a0d6-4565-bf63-2b7e0088f1db,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a18372c8043afd0587e6a36e8c263c6ef607aa4da0bddcfb7d7bd8fcddfad80\"" Apr 17 23:57:30.224336 kubelet[2741]: I0417 23:57:30.224256 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8976295d-89c2-4588-a8f3-a290386ca274-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-fck5l\" (UID: \"8976295d-89c2-4588-a8f3-a290386ca274\") " pod="tigera-operator/tigera-operator-6bf85f8dd-fck5l" Apr 17 23:57:30.224751 kubelet[2741]: I0417 23:57:30.224381 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhxvf\" (UniqueName: \"kubernetes.io/projected/8976295d-89c2-4588-a8f3-a290386ca274-kube-api-access-qhxvf\") pod \"tigera-operator-6bf85f8dd-fck5l\" (UID: \"8976295d-89c2-4588-a8f3-a290386ca274\") " pod="tigera-operator/tigera-operator-6bf85f8dd-fck5l" Apr 17 23:57:30.225533 kubelet[2741]: E0417 23:57:30.225508 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:30.232699 containerd[1571]: time="2026-04-17T23:57:30.232642310Z" level=info msg="CreateContainer within sandbox \"9a18372c8043afd0587e6a36e8c263c6ef607aa4da0bddcfb7d7bd8fcddfad80\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 23:57:30.253794 containerd[1571]: time="2026-04-17T23:57:30.253634990Z" level=info msg="CreateContainer within sandbox \"9a18372c8043afd0587e6a36e8c263c6ef607aa4da0bddcfb7d7bd8fcddfad80\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"61a933003e72f84e0047a406e22ba261a61694889bb30214b1ad2e1fb8c3a277\"" Apr 17 23:57:30.258432 containerd[1571]: time="2026-04-17T23:57:30.258361788Z" level=info msg="StartContainer for \"61a933003e72f84e0047a406e22ba261a61694889bb30214b1ad2e1fb8c3a277\"" Apr 17 23:57:30.357666 containerd[1571]: time="2026-04-17T23:57:30.356722486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-fck5l,Uid:8976295d-89c2-4588-a8f3-a290386ca274,Namespace:tigera-operator,Attempt:0,}" Apr 17 23:57:30.365188 kubelet[2741]: E0417 23:57:30.363404 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:30.441685 containerd[1571]: time="2026-04-17T23:57:30.441545420Z" level=info msg="StartContainer for \"61a933003e72f84e0047a406e22ba261a61694889bb30214b1ad2e1fb8c3a277\" returns successfully" Apr 17 23:57:30.442860 containerd[1571]: time="2026-04-17T23:57:30.442773813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:57:30.443069 containerd[1571]: time="2026-04-17T23:57:30.442999337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:57:30.443069 containerd[1571]: time="2026-04-17T23:57:30.443032586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:57:30.443316 containerd[1571]: time="2026-04-17T23:57:30.443272458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:57:30.567194 kubelet[2741]: E0417 23:57:30.567049 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:30.568325 kubelet[2741]: E0417 23:57:30.568063 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:30.860377 containerd[1571]: time="2026-04-17T23:57:30.859814197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-fck5l,Uid:8976295d-89c2-4588-a8f3-a290386ca274,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"be13d78f69babae5ce365037a67c05ee004c73e45012250fc4474217d4eac6ce\"" Apr 17 23:57:30.862974 containerd[1571]: time="2026-04-17T23:57:30.862863026Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 17 23:57:31.623458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2733510244.mount: Deactivated successfully. Apr 17 23:57:33.154835 kubelet[2741]: E0417 23:57:33.154752 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:33.191769 kubelet[2741]: I0417 23:57:33.189092 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nh94g" podStartSLOduration=4.18882246 podStartE2EDuration="4.18882246s" podCreationTimestamp="2026-04-17 23:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:57:30.642723704 +0000 UTC m=+7.288972257" watchObservedRunningTime="2026-04-17 23:57:33.18882246 +0000 UTC m=+9.835071023" Apr 17 23:57:33.589393 kubelet[2741]: E0417 23:57:33.588366 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:35.169812 containerd[1571]: time="2026-04-17T23:57:35.169698225Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:35.171710 containerd[1571]: time="2026-04-17T23:57:35.171617264Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 17 23:57:35.172451 containerd[1571]: time="2026-04-17T23:57:35.171971176Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:35.178044 containerd[1571]: time="2026-04-17T23:57:35.178004335Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:35.178896 containerd[1571]: time="2026-04-17T23:57:35.178847077Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id 
\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 4.315925812s" Apr 17 23:57:35.178948 containerd[1571]: time="2026-04-17T23:57:35.178935225Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 17 23:57:35.188777 containerd[1571]: time="2026-04-17T23:57:35.188309131Z" level=info msg="CreateContainer within sandbox \"be13d78f69babae5ce365037a67c05ee004c73e45012250fc4474217d4eac6ce\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 17 23:57:35.206563 containerd[1571]: time="2026-04-17T23:57:35.206531075Z" level=info msg="CreateContainer within sandbox \"be13d78f69babae5ce365037a67c05ee004c73e45012250fc4474217d4eac6ce\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b1c0af553db2341d805c1ec74c354c26e9c24daeb4d14d777015e3b4dd07c3a5\"" Apr 17 23:57:35.207856 containerd[1571]: time="2026-04-17T23:57:35.207816647Z" level=info msg="StartContainer for \"b1c0af553db2341d805c1ec74c354c26e9c24daeb4d14d777015e3b4dd07c3a5\"" Apr 17 23:57:35.348515 systemd[1]: run-containerd-runc-k8s.io-b1c0af553db2341d805c1ec74c354c26e9c24daeb4d14d777015e3b4dd07c3a5-runc.bwnflN.mount: Deactivated successfully. Apr 17 23:57:35.462714 containerd[1571]: time="2026-04-17T23:57:35.462602769Z" level=info msg="StartContainer for \"b1c0af553db2341d805c1ec74c354c26e9c24daeb4d14d777015e3b4dd07c3a5\" returns successfully" Apr 17 23:57:35.636379 kubelet[2741]: I0417 23:57:35.614746 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-fck5l" podStartSLOduration=2.295307384 podStartE2EDuration="6.614722893s" podCreationTimestamp="2026-04-17 23:57:29 +0000 UTC" firstStartedPulling="2026-04-17 23:57:30.862007971 +0000 UTC m=+7.508256514" lastFinishedPulling="2026-04-17 23:57:35.18142347 +0000 UTC m=+11.827672023" observedRunningTime="2026-04-17 23:57:35.614570706 +0000 UTC m=+12.260819269" watchObservedRunningTime="2026-04-17 23:57:35.614722893 +0000 UTC m=+12.260971436" Apr 17 23:57:36.179886 kubelet[2741]: E0417 23:57:36.179063 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:36.595706 kubelet[2741]: E0417 23:57:36.595606 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:44.990287 sudo[1817]: pam_unix(sudo:session): session closed for user root Apr 17 23:57:45.090717 sshd[1813]: pam_unix(sshd:session): session closed for user core Apr 17 23:57:45.102598 systemd[1]: sshd@6-172.232.15.112:22-50.85.169.122:38270.service: Deactivated successfully. Apr 17 23:57:45.119194 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 23:57:45.126224 systemd-logind[1550]: Session 7 logged out. Waiting for processes to exit. Apr 17 23:57:45.142604 systemd-logind[1550]: Removed session 7. 
Apr 17 23:57:46.356223 kubelet[2741]: I0417 23:57:46.356076 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwg7c\" (UniqueName: \"kubernetes.io/projected/00e2c3e5-50e0-4c20-aa58-7d14d2d05fc9-kube-api-access-rwg7c\") pod \"calico-typha-59d9f4cf57-ppn79\" (UID: \"00e2c3e5-50e0-4c20-aa58-7d14d2d05fc9\") " pod="calico-system/calico-typha-59d9f4cf57-ppn79" Apr 17 23:57:46.356223 kubelet[2741]: I0417 23:57:46.356226 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00e2c3e5-50e0-4c20-aa58-7d14d2d05fc9-tigera-ca-bundle\") pod \"calico-typha-59d9f4cf57-ppn79\" (UID: \"00e2c3e5-50e0-4c20-aa58-7d14d2d05fc9\") " pod="calico-system/calico-typha-59d9f4cf57-ppn79" Apr 17 23:57:46.357208 kubelet[2741]: I0417 23:57:46.356281 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/00e2c3e5-50e0-4c20-aa58-7d14d2d05fc9-typha-certs\") pod \"calico-typha-59d9f4cf57-ppn79\" (UID: \"00e2c3e5-50e0-4c20-aa58-7d14d2d05fc9\") " pod="calico-system/calico-typha-59d9f4cf57-ppn79" Apr 17 23:57:46.459462 kubelet[2741]: I0417 23:57:46.457063 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/85903b85-09bf-4674-9d7b-af054fd8867a-var-lib-calico\") pod \"calico-node-cb58j\" (UID: \"85903b85-09bf-4674-9d7b-af054fd8867a\") " pod="calico-system/calico-node-cb58j" Apr 17 23:57:46.459462 kubelet[2741]: I0417 23:57:46.457220 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/85903b85-09bf-4674-9d7b-af054fd8867a-cni-log-dir\") pod \"calico-node-cb58j\" (UID: \"85903b85-09bf-4674-9d7b-af054fd8867a\") " pod="calico-system/calico-node-cb58j" Apr 17 23:57:46.459462 kubelet[2741]: I0417 23:57:46.457261 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/85903b85-09bf-4674-9d7b-af054fd8867a-cni-net-dir\") pod \"calico-node-cb58j\" (UID: \"85903b85-09bf-4674-9d7b-af054fd8867a\") " pod="calico-system/calico-node-cb58j" Apr 17 23:57:46.459462 kubelet[2741]: I0417 23:57:46.457291 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6c66\" (UniqueName: \"kubernetes.io/projected/85903b85-09bf-4674-9d7b-af054fd8867a-kube-api-access-f6c66\") pod \"calico-node-cb58j\" (UID: \"85903b85-09bf-4674-9d7b-af054fd8867a\") " pod="calico-system/calico-node-cb58j" Apr 17 23:57:46.459462 kubelet[2741]: I0417 23:57:46.457364 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/85903b85-09bf-4674-9d7b-af054fd8867a-node-certs\") pod \"calico-node-cb58j\" (UID: \"85903b85-09bf-4674-9d7b-af054fd8867a\") " pod="calico-system/calico-node-cb58j" Apr 17 23:57:46.459875 kubelet[2741]: I0417 23:57:46.457392 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/85903b85-09bf-4674-9d7b-af054fd8867a-policysync\") pod \"calico-node-cb58j\" (UID: \"85903b85-09bf-4674-9d7b-af054fd8867a\") " pod="calico-system/calico-node-cb58j" Apr 17 
23:57:46.459875 kubelet[2741]: I0417 23:57:46.457418 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/85903b85-09bf-4674-9d7b-af054fd8867a-sys-fs\") pod \"calico-node-cb58j\" (UID: \"85903b85-09bf-4674-9d7b-af054fd8867a\") " pod="calico-system/calico-node-cb58j" Apr 17 23:57:46.459875 kubelet[2741]: I0417 23:57:46.457442 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85903b85-09bf-4674-9d7b-af054fd8867a-tigera-ca-bundle\") pod \"calico-node-cb58j\" (UID: \"85903b85-09bf-4674-9d7b-af054fd8867a\") " pod="calico-system/calico-node-cb58j" Apr 17 23:57:46.459875 kubelet[2741]: I0417 23:57:46.457467 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/85903b85-09bf-4674-9d7b-af054fd8867a-var-run-calico\") pod \"calico-node-cb58j\" (UID: \"85903b85-09bf-4674-9d7b-af054fd8867a\") " pod="calico-system/calico-node-cb58j" Apr 17 23:57:46.459875 kubelet[2741]: I0417 23:57:46.457493 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85903b85-09bf-4674-9d7b-af054fd8867a-lib-modules\") pod \"calico-node-cb58j\" (UID: \"85903b85-09bf-4674-9d7b-af054fd8867a\") " pod="calico-system/calico-node-cb58j" Apr 17 23:57:46.460069 kubelet[2741]: I0417 23:57:46.457526 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85903b85-09bf-4674-9d7b-af054fd8867a-xtables-lock\") pod \"calico-node-cb58j\" (UID: \"85903b85-09bf-4674-9d7b-af054fd8867a\") " pod="calico-system/calico-node-cb58j" Apr 17 23:57:46.460069 kubelet[2741]: I0417 23:57:46.457553 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/85903b85-09bf-4674-9d7b-af054fd8867a-bpffs\") pod \"calico-node-cb58j\" (UID: \"85903b85-09bf-4674-9d7b-af054fd8867a\") " pod="calico-system/calico-node-cb58j" Apr 17 23:57:46.460069 kubelet[2741]: I0417 23:57:46.457599 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/85903b85-09bf-4674-9d7b-af054fd8867a-cni-bin-dir\") pod \"calico-node-cb58j\" (UID: \"85903b85-09bf-4674-9d7b-af054fd8867a\") " pod="calico-system/calico-node-cb58j" Apr 17 23:57:46.460069 kubelet[2741]: I0417 23:57:46.457637 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/85903b85-09bf-4674-9d7b-af054fd8867a-flexvol-driver-host\") pod \"calico-node-cb58j\" (UID: \"85903b85-09bf-4674-9d7b-af054fd8867a\") " pod="calico-system/calico-node-cb58j" Apr 17 23:57:46.460069 kubelet[2741]: I0417 23:57:46.457699 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/85903b85-09bf-4674-9d7b-af054fd8867a-nodeproc\") pod \"calico-node-cb58j\" (UID: \"85903b85-09bf-4674-9d7b-af054fd8867a\") " pod="calico-system/calico-node-cb58j" Apr 17 23:57:46.531031 kubelet[2741]: E0417 23:57:46.530578 2741 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xt426" podUID="8b300517-4fdc-4aae-b868-e2f538976f49" Apr 17 23:57:46.558544 kubelet[2741]: I0417 23:57:46.558085 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b300517-4fdc-4aae-b868-e2f538976f49-kubelet-dir\") pod \"csi-node-driver-xt426\" (UID: \"8b300517-4fdc-4aae-b868-e2f538976f49\") " pod="calico-system/csi-node-driver-xt426" Apr 17 23:57:46.559156 kubelet[2741]: I0417 23:57:46.559036 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8b300517-4fdc-4aae-b868-e2f538976f49-socket-dir\") pod \"csi-node-driver-xt426\" (UID: \"8b300517-4fdc-4aae-b868-e2f538976f49\") " pod="calico-system/csi-node-driver-xt426" Apr 17 23:57:46.559267 kubelet[2741]: I0417 23:57:46.559241 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8b300517-4fdc-4aae-b868-e2f538976f49-varrun\") pod \"csi-node-driver-xt426\" (UID: \"8b300517-4fdc-4aae-b868-e2f538976f49\") " pod="calico-system/csi-node-driver-xt426" Apr 17 23:57:46.559338 kubelet[2741]: I0417 23:57:46.559304 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8b300517-4fdc-4aae-b868-e2f538976f49-registration-dir\") pod \"csi-node-driver-xt426\" (UID: \"8b300517-4fdc-4aae-b868-e2f538976f49\") " pod="calico-system/csi-node-driver-xt426" Apr 17 23:57:46.559386 kubelet[2741]: I0417 23:57:46.559341 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbwbq\" (UniqueName: \"kubernetes.io/projected/8b300517-4fdc-4aae-b868-e2f538976f49-kube-api-access-nbwbq\") pod \"csi-node-driver-xt426\" (UID: \"8b300517-4fdc-4aae-b868-e2f538976f49\") " pod="calico-system/csi-node-driver-xt426" Apr 17 23:57:46.570911 kubelet[2741]: E0417 23:57:46.570879 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.571259 kubelet[2741]: W0417 23:57:46.571236 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.571387 kubelet[2741]: E0417 23:57:46.571368 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.586525 kubelet[2741]: E0417 23:57:46.585919 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.586676 kubelet[2741]: W0417 23:57:46.586653 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.586899 kubelet[2741]: E0417 23:57:46.586848 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:57:46.623666 kubelet[2741]: E0417 23:57:46.623528 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:46.624789 containerd[1571]: time="2026-04-17T23:57:46.624585964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59d9f4cf57-ppn79,Uid:00e2c3e5-50e0-4c20-aa58-7d14d2d05fc9,Namespace:calico-system,Attempt:0,}" Apr 17 23:57:46.660674 kubelet[2741]: E0417 23:57:46.660631 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.660674 kubelet[2741]: W0417 23:57:46.660667 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.660974 kubelet[2741]: E0417 23:57:46.660696 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.662626 kubelet[2741]: E0417 23:57:46.662436 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.662626 kubelet[2741]: W0417 23:57:46.662461 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.662626 kubelet[2741]: E0417 23:57:46.662490 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.662851 kubelet[2741]: E0417 23:57:46.662822 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.662851 kubelet[2741]: W0417 23:57:46.662845 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.662987 kubelet[2741]: E0417 23:57:46.662863 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.663279 kubelet[2741]: E0417 23:57:46.663264 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.663413 kubelet[2741]: W0417 23:57:46.663346 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.663413 kubelet[2741]: E0417 23:57:46.663366 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:57:46.664152 kubelet[2741]: E0417 23:57:46.663948 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.664152 kubelet[2741]: W0417 23:57:46.663976 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.664152 kubelet[2741]: E0417 23:57:46.663989 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.664501 kubelet[2741]: E0417 23:57:46.664478 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.664501 kubelet[2741]: W0417 23:57:46.664496 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.664607 kubelet[2741]: E0417 23:57:46.664510 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.665576 kubelet[2741]: E0417 23:57:46.665397 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.665576 kubelet[2741]: W0417 23:57:46.665413 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.665576 kubelet[2741]: E0417 23:57:46.665430 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.665754 kubelet[2741]: E0417 23:57:46.665727 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.665754 kubelet[2741]: W0417 23:57:46.665749 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.665814 kubelet[2741]: E0417 23:57:46.665762 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.666645 kubelet[2741]: E0417 23:57:46.666627 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.666645 kubelet[2741]: W0417 23:57:46.666642 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.666783 kubelet[2741]: E0417 23:57:46.666653 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:57:46.668079 kubelet[2741]: E0417 23:57:46.668011 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.668079 kubelet[2741]: W0417 23:57:46.668027 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.668079 kubelet[2741]: E0417 23:57:46.668040 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.668977 kubelet[2741]: E0417 23:57:46.668360 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.668977 kubelet[2741]: W0417 23:57:46.668370 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.668977 kubelet[2741]: E0417 23:57:46.668381 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.668977 kubelet[2741]: E0417 23:57:46.668680 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.668977 kubelet[2741]: W0417 23:57:46.668689 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.668977 kubelet[2741]: E0417 23:57:46.668698 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.670024 kubelet[2741]: E0417 23:57:46.669402 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.670024 kubelet[2741]: W0417 23:57:46.669417 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.670024 kubelet[2741]: E0417 23:57:46.669426 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.671825 kubelet[2741]: E0417 23:57:46.671805 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.671825 kubelet[2741]: W0417 23:57:46.671821 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.671926 kubelet[2741]: E0417 23:57:46.671837 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:57:46.672423 kubelet[2741]: E0417 23:57:46.672403 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.672423 kubelet[2741]: W0417 23:57:46.672418 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.672532 kubelet[2741]: E0417 23:57:46.672432 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.672725 kubelet[2741]: E0417 23:57:46.672710 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.672725 kubelet[2741]: W0417 23:57:46.672723 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.672787 kubelet[2741]: E0417 23:57:46.672733 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.673374 kubelet[2741]: E0417 23:57:46.673356 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.673374 kubelet[2741]: W0417 23:57:46.673370 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.673461 kubelet[2741]: E0417 23:57:46.673380 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.673991 kubelet[2741]: E0417 23:57:46.673961 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.673991 kubelet[2741]: W0417 23:57:46.673979 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.674072 kubelet[2741]: E0417 23:57:46.673992 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.674766 kubelet[2741]: E0417 23:57:46.674738 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.674766 kubelet[2741]: W0417 23:57:46.674752 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.674766 kubelet[2741]: E0417 23:57:46.674762 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:57:46.675543 kubelet[2741]: E0417 23:57:46.675506 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.675543 kubelet[2741]: W0417 23:57:46.675532 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.675638 kubelet[2741]: E0417 23:57:46.675549 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.676321 kubelet[2741]: E0417 23:57:46.676299 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.676321 kubelet[2741]: W0417 23:57:46.676318 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.676424 kubelet[2741]: E0417 23:57:46.676333 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.677330 kubelet[2741]: E0417 23:57:46.677295 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.677330 kubelet[2741]: W0417 23:57:46.677314 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.677330 kubelet[2741]: E0417 23:57:46.677325 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.678147 kubelet[2741]: E0417 23:57:46.678085 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.678147 kubelet[2741]: W0417 23:57:46.678103 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.678147 kubelet[2741]: E0417 23:57:46.678113 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.678818 kubelet[2741]: E0417 23:57:46.678797 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.678818 kubelet[2741]: W0417 23:57:46.678812 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.678903 kubelet[2741]: E0417 23:57:46.678824 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:57:46.679617 kubelet[2741]: E0417 23:57:46.679526 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.679617 kubelet[2741]: W0417 23:57:46.679546 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.679617 kubelet[2741]: E0417 23:57:46.679589 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.683819 containerd[1571]: time="2026-04-17T23:57:46.681963501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:57:46.683819 containerd[1571]: time="2026-04-17T23:57:46.683738282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:57:46.683819 containerd[1571]: time="2026-04-17T23:57:46.683758702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:57:46.684335 containerd[1571]: time="2026-04-17T23:57:46.684270366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:57:46.689695 kubelet[2741]: E0417 23:57:46.689633 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:46.689695 kubelet[2741]: W0417 23:57:46.689662 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:46.689695 kubelet[2741]: E0417 23:57:46.689685 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:46.843027 containerd[1571]: time="2026-04-17T23:57:46.837889105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cb58j,Uid:85903b85-09bf-4674-9d7b-af054fd8867a,Namespace:calico-system,Attempt:0,}" Apr 17 23:57:46.950169 containerd[1571]: time="2026-04-17T23:57:46.950041936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:57:46.950323 containerd[1571]: time="2026-04-17T23:57:46.950171505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:57:46.950323 containerd[1571]: time="2026-04-17T23:57:46.950218224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:57:46.950474 containerd[1571]: time="2026-04-17T23:57:46.950351343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:57:46.960668 containerd[1571]: time="2026-04-17T23:57:46.960603233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59d9f4cf57-ppn79,Uid:00e2c3e5-50e0-4c20-aa58-7d14d2d05fc9,Namespace:calico-system,Attempt:0,} returns sandbox id \"04e51628f443b0542202216903e72f93769cb7f91bbcb01b34d277d83feb0014\"" Apr 17 23:57:46.962582 kubelet[2741]: E0417 23:57:46.961866 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:46.963884 containerd[1571]: time="2026-04-17T23:57:46.963860048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 17 23:57:47.114272 containerd[1571]: time="2026-04-17T23:57:47.114203807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cb58j,Uid:85903b85-09bf-4674-9d7b-af054fd8867a,Namespace:calico-system,Attempt:0,} returns sandbox id \"f33eca45f6c78869e20850bfb8e979891c487b3521e443dbf31bb8b447b39382\"" Apr 17 23:57:47.925381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1212040814.mount: Deactivated successfully. Apr 17 23:57:48.512806 kubelet[2741]: E0417 23:57:48.510474 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xt426" podUID="8b300517-4fdc-4aae-b868-e2f538976f49" Apr 17 23:57:50.143144 containerd[1571]: time="2026-04-17T23:57:50.143052163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:50.144361 containerd[1571]: time="2026-04-17T23:57:50.144226553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 17 23:57:50.148170 containerd[1571]: time="2026-04-17T23:57:50.147693364Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:50.149183 containerd[1571]: time="2026-04-17T23:57:50.149150472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.185116136s" Apr 17 23:57:50.149320 containerd[1571]: time="2026-04-17T23:57:50.149298771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 17 23:57:50.150058 containerd[1571]: time="2026-04-17T23:57:50.150010235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:50.152570 containerd[1571]: time="2026-04-17T23:57:50.152528624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 17 23:57:50.198932 containerd[1571]: time="2026-04-17T23:57:50.198883822Z" level=info msg="CreateContainer within sandbox 
\"04e51628f443b0542202216903e72f93769cb7f91bbcb01b34d277d83feb0014\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 17 23:57:50.220360 containerd[1571]: time="2026-04-17T23:57:50.220227246Z" level=info msg="CreateContainer within sandbox \"04e51628f443b0542202216903e72f93769cb7f91bbcb01b34d277d83feb0014\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0fd8cd38b4206d23e96067f58474e46db3cb675de249b37127bba4cae753e75a\"" Apr 17 23:57:50.222523 containerd[1571]: time="2026-04-17T23:57:50.222312278Z" level=info msg="StartContainer for \"0fd8cd38b4206d23e96067f58474e46db3cb675de249b37127bba4cae753e75a\"" Apr 17 23:57:50.453711 containerd[1571]: time="2026-04-17T23:57:50.453526130Z" level=info msg="StartContainer for \"0fd8cd38b4206d23e96067f58474e46db3cb675de249b37127bba4cae753e75a\" returns successfully" Apr 17 23:57:50.514025 kubelet[2741]: E0417 23:57:50.513820 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xt426" podUID="8b300517-4fdc-4aae-b868-e2f538976f49" Apr 17 23:57:51.076779 kubelet[2741]: E0417 23:57:51.076412 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:51.082364 kubelet[2741]: E0417 23:57:51.082305 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.082768 kubelet[2741]: W0417 23:57:51.082479 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.082768 kubelet[2741]: E0417 23:57:51.082549 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.083110 kubelet[2741]: E0417 23:57:51.083093 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.083620 kubelet[2741]: W0417 23:57:51.083322 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.083620 kubelet[2741]: E0417 23:57:51.083441 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.084165 kubelet[2741]: E0417 23:57:51.084024 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.084165 kubelet[2741]: W0417 23:57:51.084043 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.084165 kubelet[2741]: E0417 23:57:51.084058 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:57:51.085187 kubelet[2741]: E0417 23:57:51.084936 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.085187 kubelet[2741]: W0417 23:57:51.084952 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.085187 kubelet[2741]: E0417 23:57:51.085024 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.085939 kubelet[2741]: E0417 23:57:51.085814 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.086165 kubelet[2741]: W0417 23:57:51.086040 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.086165 kubelet[2741]: E0417 23:57:51.086059 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.088739 kubelet[2741]: E0417 23:57:51.087864 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.088739 kubelet[2741]: W0417 23:57:51.087881 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.088739 kubelet[2741]: E0417 23:57:51.087895 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.088739 kubelet[2741]: E0417 23:57:51.088226 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.088739 kubelet[2741]: W0417 23:57:51.088238 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.088739 kubelet[2741]: E0417 23:57:51.088249 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.088739 kubelet[2741]: E0417 23:57:51.088534 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.088739 kubelet[2741]: W0417 23:57:51.088551 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.088739 kubelet[2741]: E0417 23:57:51.088569 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:57:51.089294 kubelet[2741]: E0417 23:57:51.089277 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.089515 kubelet[2741]: W0417 23:57:51.089384 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.089515 kubelet[2741]: E0417 23:57:51.089403 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.089923 kubelet[2741]: E0417 23:57:51.089765 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.089923 kubelet[2741]: W0417 23:57:51.089780 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.089923 kubelet[2741]: E0417 23:57:51.089792 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.090288 kubelet[2741]: E0417 23:57:51.090272 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.090397 kubelet[2741]: W0417 23:57:51.090379 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.090481 kubelet[2741]: E0417 23:57:51.090465 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.090857 kubelet[2741]: E0417 23:57:51.090840 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.090943 kubelet[2741]: W0417 23:57:51.090922 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.091183 kubelet[2741]: E0417 23:57:51.091012 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.091424 kubelet[2741]: E0417 23:57:51.091408 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.091580 kubelet[2741]: W0417 23:57:51.091558 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.091860 kubelet[2741]: E0417 23:57:51.091652 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:57:51.092097 kubelet[2741]: E0417 23:57:51.092080 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.092405 kubelet[2741]: W0417 23:57:51.092181 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.092405 kubelet[2741]: E0417 23:57:51.092206 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.092947 kubelet[2741]: E0417 23:57:51.092750 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.092947 kubelet[2741]: W0417 23:57:51.092766 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.092947 kubelet[2741]: E0417 23:57:51.092780 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.105727 kubelet[2741]: I0417 23:57:51.105537 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59d9f4cf57-ppn79" podStartSLOduration=1.917019964 podStartE2EDuration="5.10550276s" podCreationTimestamp="2026-04-17 23:57:46 +0000 UTC" firstStartedPulling="2026-04-17 23:57:46.963435553 +0000 UTC m=+23.609684096" lastFinishedPulling="2026-04-17 23:57:50.151918349 +0000 UTC m=+26.798166892" observedRunningTime="2026-04-17 23:57:51.099475037 +0000 UTC m=+27.745723590" watchObservedRunningTime="2026-04-17 23:57:51.10550276 +0000 UTC m=+27.751751303" Apr 17 23:57:51.186587 kubelet[2741]: E0417 23:57:51.186564 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.186684 kubelet[2741]: W0417 23:57:51.186670 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.186784 kubelet[2741]: E0417 23:57:51.186769 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.187857 kubelet[2741]: E0417 23:57:51.187757 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.187857 kubelet[2741]: W0417 23:57:51.187769 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.187857 kubelet[2741]: E0417 23:57:51.187780 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:57:51.188472 kubelet[2741]: E0417 23:57:51.188375 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.188472 kubelet[2741]: W0417 23:57:51.188386 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.188472 kubelet[2741]: E0417 23:57:51.188395 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.189314 kubelet[2741]: E0417 23:57:51.189208 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.189314 kubelet[2741]: W0417 23:57:51.189221 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.189314 kubelet[2741]: E0417 23:57:51.189234 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.189784 kubelet[2741]: E0417 23:57:51.189669 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.189784 kubelet[2741]: W0417 23:57:51.189679 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.189784 kubelet[2741]: E0417 23:57:51.189688 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.190176 kubelet[2741]: E0417 23:57:51.190051 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.190176 kubelet[2741]: W0417 23:57:51.190062 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.190176 kubelet[2741]: E0417 23:57:51.190070 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.191350 kubelet[2741]: E0417 23:57:51.191338 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.191611 kubelet[2741]: W0417 23:57:51.191430 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.191611 kubelet[2741]: E0417 23:57:51.191448 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:57:51.192170 kubelet[2741]: E0417 23:57:51.192157 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.192233 kubelet[2741]: W0417 23:57:51.192223 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.192376 kubelet[2741]: E0417 23:57:51.192284 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.192698 kubelet[2741]: E0417 23:57:51.192652 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.192698 kubelet[2741]: W0417 23:57:51.192666 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.192698 kubelet[2741]: E0417 23:57:51.192679 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.194974 kubelet[2741]: E0417 23:57:51.193945 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.194974 kubelet[2741]: W0417 23:57:51.193958 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.194974 kubelet[2741]: E0417 23:57:51.193968 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.195755 kubelet[2741]: E0417 23:57:51.195742 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.195807 kubelet[2741]: W0417 23:57:51.195797 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.195850 kubelet[2741]: E0417 23:57:51.195840 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.196307 kubelet[2741]: E0417 23:57:51.196296 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.196359 kubelet[2741]: W0417 23:57:51.196349 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.196437 kubelet[2741]: E0417 23:57:51.196424 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:57:51.196917 kubelet[2741]: E0417 23:57:51.196905 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.196981 kubelet[2741]: W0417 23:57:51.196970 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.197043 kubelet[2741]: E0417 23:57:51.197033 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.198467 kubelet[2741]: E0417 23:57:51.198455 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.198528 kubelet[2741]: W0417 23:57:51.198517 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.198575 kubelet[2741]: E0417 23:57:51.198561 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.199738 kubelet[2741]: E0417 23:57:51.199691 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.199738 kubelet[2741]: W0417 23:57:51.199706 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.199738 kubelet[2741]: E0417 23:57:51.199719 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.201225 kubelet[2741]: E0417 23:57:51.200669 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.201225 kubelet[2741]: W0417 23:57:51.200686 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.201225 kubelet[2741]: E0417 23:57:51.200735 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.203589 kubelet[2741]: E0417 23:57:51.203156 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.203589 kubelet[2741]: W0417 23:57:51.203210 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.203589 kubelet[2741]: E0417 23:57:51.203223 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:57:51.203889 kubelet[2741]: E0417 23:57:51.203858 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:57:51.203889 kubelet[2741]: W0417 23:57:51.203888 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:57:51.203969 kubelet[2741]: E0417 23:57:51.203902 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:57:51.206760 containerd[1571]: time="2026-04-17T23:57:51.206725307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:51.208093 containerd[1571]: time="2026-04-17T23:57:51.208016727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 17 23:57:51.208760 containerd[1571]: time="2026-04-17T23:57:51.208738602Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:51.210926 containerd[1571]: time="2026-04-17T23:57:51.210898145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:51.211843 containerd[1571]: time="2026-04-17T23:57:51.211806838Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.059217804s" Apr 17 23:57:51.211883 containerd[1571]: time="2026-04-17T23:57:51.211849008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 17 23:57:51.216710 containerd[1571]: time="2026-04-17T23:57:51.216661000Z" level=info msg="CreateContainer within sandbox \"f33eca45f6c78869e20850bfb8e979891c487b3521e443dbf31bb8b447b39382\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 17 23:57:51.234243 containerd[1571]: time="2026-04-17T23:57:51.234203814Z" level=info msg="CreateContainer within sandbox \"f33eca45f6c78869e20850bfb8e979891c487b3521e443dbf31bb8b447b39382\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e12b384f660e5062a442f1de765d29e95a497837704d71e26f1c18918314dec2\"" Apr 17 23:57:51.235484 containerd[1571]: time="2026-04-17T23:57:51.235085798Z" level=info msg="StartContainer for \"e12b384f660e5062a442f1de765d29e95a497837704d71e26f1c18918314dec2\"" Apr 17 23:57:51.236249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3107826255.mount: Deactivated successfully. 
Apr 17 23:57:51.379667 containerd[1571]: time="2026-04-17T23:57:51.379625849Z" level=info msg="StartContainer for \"e12b384f660e5062a442f1de765d29e95a497837704d71e26f1c18918314dec2\" returns successfully" Apr 17 23:57:51.653452 containerd[1571]: time="2026-04-17T23:57:51.653102192Z" level=info msg="shim disconnected" id=e12b384f660e5062a442f1de765d29e95a497837704d71e26f1c18918314dec2 namespace=k8s.io Apr 17 23:57:51.653452 containerd[1571]: time="2026-04-17T23:57:51.653270921Z" level=warning msg="cleaning up after shim disconnected" id=e12b384f660e5062a442f1de765d29e95a497837704d71e26f1c18918314dec2 namespace=k8s.io Apr 17 23:57:51.653452 containerd[1571]: time="2026-04-17T23:57:51.653281891Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:57:52.075620 kubelet[2741]: I0417 23:57:52.075469 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:57:52.076246 kubelet[2741]: E0417 23:57:52.076109 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:57:52.097384 containerd[1571]: time="2026-04-17T23:57:52.097291091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 17 23:57:52.180789 systemd[1]: run-containerd-runc-k8s.io-e12b384f660e5062a442f1de765d29e95a497837704d71e26f1c18918314dec2-runc.E7pQP2.mount: Deactivated successfully. Apr 17 23:57:52.181055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e12b384f660e5062a442f1de765d29e95a497837704d71e26f1c18918314dec2-rootfs.mount: Deactivated successfully. Apr 17 23:57:52.509650 kubelet[2741]: E0417 23:57:52.509510 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xt426" podUID="8b300517-4fdc-4aae-b868-e2f538976f49" Apr 17 23:57:54.510293 kubelet[2741]: E0417 23:57:54.509682 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xt426" podUID="8b300517-4fdc-4aae-b868-e2f538976f49" Apr 17 23:57:56.510911 kubelet[2741]: E0417 23:57:56.510705 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xt426" podUID="8b300517-4fdc-4aae-b868-e2f538976f49" Apr 17 23:57:57.577936 kubelet[2741]: E0417 23:57:57.577828 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xt426" podUID="8b300517-4fdc-4aae-b868-e2f538976f49" Apr 17 23:57:59.511558 kubelet[2741]: E0417 23:57:59.511472 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xt426" 
podUID="8b300517-4fdc-4aae-b868-e2f538976f49" Apr 17 23:58:01.647903 kubelet[2741]: E0417 23:58:01.643728 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xt426" podUID="8b300517-4fdc-4aae-b868-e2f538976f49" Apr 17 23:58:04.037218 kubelet[2741]: E0417 23:58:04.036875 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xt426" podUID="8b300517-4fdc-4aae-b868-e2f538976f49" Apr 17 23:58:05.528544 kubelet[2741]: E0417 23:58:05.527510 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xt426" podUID="8b300517-4fdc-4aae-b868-e2f538976f49" Apr 17 23:58:07.438167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2296054269.mount: Deactivated successfully. Apr 17 23:58:07.476472 containerd[1571]: time="2026-04-17T23:58:07.476368610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:07.479011 containerd[1571]: time="2026-04-17T23:58:07.478974513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 17 23:58:07.479575 containerd[1571]: time="2026-04-17T23:58:07.479554411Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:07.485380 containerd[1571]: time="2026-04-17T23:58:07.485342376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:07.488189 containerd[1571]: time="2026-04-17T23:58:07.488141848Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 15.390452789s" Apr 17 23:58:07.488447 containerd[1571]: time="2026-04-17T23:58:07.488375517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 17 23:58:07.498082 containerd[1571]: time="2026-04-17T23:58:07.498046480Z" level=info msg="CreateContainer within sandbox \"f33eca45f6c78869e20850bfb8e979891c487b3521e443dbf31bb8b447b39382\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 17 23:58:07.510670 kubelet[2741]: E0417 23:58:07.510610 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xt426" 
podUID="8b300517-4fdc-4aae-b868-e2f538976f49" Apr 17 23:58:07.522712 containerd[1571]: time="2026-04-17T23:58:07.522611253Z" level=info msg="CreateContainer within sandbox \"f33eca45f6c78869e20850bfb8e979891c487b3521e443dbf31bb8b447b39382\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"33f039772a6d407e11002d4a554895ad77c9b6e8904ab5942e109047c46189a3\"" Apr 17 23:58:07.525189 containerd[1571]: time="2026-04-17T23:58:07.524571147Z" level=info msg="StartContainer for \"33f039772a6d407e11002d4a554895ad77c9b6e8904ab5942e109047c46189a3\"" Apr 17 23:58:07.752335 containerd[1571]: time="2026-04-17T23:58:07.752217530Z" level=info msg="StartContainer for \"33f039772a6d407e11002d4a554895ad77c9b6e8904ab5942e109047c46189a3\" returns successfully" Apr 17 23:58:08.078215 containerd[1571]: time="2026-04-17T23:58:08.077935286Z" level=info msg="shim disconnected" id=33f039772a6d407e11002d4a554895ad77c9b6e8904ab5942e109047c46189a3 namespace=k8s.io Apr 17 23:58:08.078215 containerd[1571]: time="2026-04-17T23:58:08.078066346Z" level=warning msg="cleaning up after shim disconnected" id=33f039772a6d407e11002d4a554895ad77c9b6e8904ab5942e109047c46189a3 namespace=k8s.io Apr 17 23:58:08.078215 containerd[1571]: time="2026-04-17T23:58:08.078079206Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:58:08.407545 containerd[1571]: time="2026-04-17T23:58:08.407494125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 17 23:58:08.435073 systemd[1]: run-containerd-runc-k8s.io-33f039772a6d407e11002d4a554895ad77c9b6e8904ab5942e109047c46189a3-runc.jftA2n.mount: Deactivated successfully. Apr 17 23:58:08.436793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33f039772a6d407e11002d4a554895ad77c9b6e8904ab5942e109047c46189a3-rootfs.mount: Deactivated successfully. 
Apr 17 23:58:09.512874 kubelet[2741]: E0417 23:58:09.512278 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xt426" podUID="8b300517-4fdc-4aae-b868-e2f538976f49" Apr 17 23:58:11.511533 kubelet[2741]: E0417 23:58:11.510056 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xt426" podUID="8b300517-4fdc-4aae-b868-e2f538976f49" Apr 17 23:58:13.348869 containerd[1571]: time="2026-04-17T23:58:13.348742893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:13.355151 containerd[1571]: time="2026-04-17T23:58:13.351305811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 17 23:58:13.355151 containerd[1571]: time="2026-04-17T23:58:13.352089165Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:13.356301 containerd[1571]: time="2026-04-17T23:58:13.356178799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:13.358157 containerd[1571]: time="2026-04-17T23:58:13.357773160Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.950221934s" Apr 17 23:58:13.358157 containerd[1571]: time="2026-04-17T23:58:13.357849065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 17 23:58:13.370561 containerd[1571]: time="2026-04-17T23:58:13.370483413Z" level=info msg="CreateContainer within sandbox \"f33eca45f6c78869e20850bfb8e979891c487b3521e443dbf31bb8b447b39382\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 17 23:58:13.396152 containerd[1571]: time="2026-04-17T23:58:13.392905251Z" level=info msg="CreateContainer within sandbox \"f33eca45f6c78869e20850bfb8e979891c487b3521e443dbf31bb8b447b39382\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"652fda5fdf8d1ed7850fd4c46e51d2936ae39f44f3b0c4edd7c3bbd8b9a39c08\"" Apr 17 23:58:13.396152 containerd[1571]: time="2026-04-17T23:58:13.394392434Z" level=info msg="StartContainer for \"652fda5fdf8d1ed7850fd4c46e51d2936ae39f44f3b0c4edd7c3bbd8b9a39c08\"" Apr 17 23:58:13.398325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3948283817.mount: Deactivated successfully. 
Apr 17 23:58:13.522168 kubelet[2741]: E0417 23:58:13.522050 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xt426" podUID="8b300517-4fdc-4aae-b868-e2f538976f49" Apr 17 23:58:13.759607 kubelet[2741]: I0417 23:58:13.759540 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:58:13.761537 kubelet[2741]: E0417 23:58:13.761495 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:58:13.919714 containerd[1571]: time="2026-04-17T23:58:13.918341825Z" level=info msg="StartContainer for \"652fda5fdf8d1ed7850fd4c46e51d2936ae39f44f3b0c4edd7c3bbd8b9a39c08\" returns successfully" Apr 17 23:58:14.467985 kubelet[2741]: E0417 23:58:14.467948 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:58:14.813178 containerd[1571]: time="2026-04-17T23:58:14.811667249Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:58:14.846155 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-652fda5fdf8d1ed7850fd4c46e51d2936ae39f44f3b0c4edd7c3bbd8b9a39c08-rootfs.mount: Deactivated successfully. Apr 17 23:58:14.848471 containerd[1571]: time="2026-04-17T23:58:14.848299454Z" level=info msg="shim disconnected" id=652fda5fdf8d1ed7850fd4c46e51d2936ae39f44f3b0c4edd7c3bbd8b9a39c08 namespace=k8s.io Apr 17 23:58:14.848471 containerd[1571]: time="2026-04-17T23:58:14.848468045Z" level=warning msg="cleaning up after shim disconnected" id=652fda5fdf8d1ed7850fd4c46e51d2936ae39f44f3b0c4edd7c3bbd8b9a39c08 namespace=k8s.io Apr 17 23:58:14.848700 containerd[1571]: time="2026-04-17T23:58:14.848479116Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:58:14.852765 kubelet[2741]: I0417 23:58:14.852535 2741 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 17 23:58:14.978635 kubelet[2741]: I0417 23:58:14.978044 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4eb46d8d-9eca-422e-ba74-930bbc0b7688-config-volume\") pod \"coredns-674b8bbfcf-rqhzd\" (UID: \"4eb46d8d-9eca-422e-ba74-930bbc0b7688\") " pod="kube-system/coredns-674b8bbfcf-rqhzd" Apr 17 23:58:14.978635 kubelet[2741]: I0417 23:58:14.978099 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3c8d0225-44a8-413f-8e5f-1d6fb1b28b65-whisker-backend-key-pair\") pod \"whisker-6db78f5767-pb686\" (UID: \"3c8d0225-44a8-413f-8e5f-1d6fb1b28b65\") " pod="calico-system/whisker-6db78f5767-pb686" Apr 17 23:58:14.978635 kubelet[2741]: I0417 23:58:14.978159 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rkct\" (UniqueName: 
\"kubernetes.io/projected/47346385-5f17-4c08-b4a3-a3958d4f414b-kube-api-access-5rkct\") pod \"calico-apiserver-6fff8bdfbc-mvj4v\" (UID: \"47346385-5f17-4c08-b4a3-a3958d4f414b\") " pod="calico-system/calico-apiserver-6fff8bdfbc-mvj4v" Apr 17 23:58:14.978635 kubelet[2741]: I0417 23:58:14.978184 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8c5738b-2e74-4bd4-9c11-4f37db2195d6-config\") pod \"goldmane-5b85766d88-l2kf2\" (UID: \"e8c5738b-2e74-4bd4-9c11-4f37db2195d6\") " pod="calico-system/goldmane-5b85766d88-l2kf2" Apr 17 23:58:14.978635 kubelet[2741]: I0417 23:58:14.978199 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8c5738b-2e74-4bd4-9c11-4f37db2195d6-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-l2kf2\" (UID: \"e8c5738b-2e74-4bd4-9c11-4f37db2195d6\") " pod="calico-system/goldmane-5b85766d88-l2kf2" Apr 17 23:58:14.978919 kubelet[2741]: I0417 23:58:14.978212 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d9a00af0-47b3-4d37-9e23-a56a19b9db0e-calico-apiserver-certs\") pod \"calico-apiserver-6fff8bdfbc-7mc8b\" (UID: \"d9a00af0-47b3-4d37-9e23-a56a19b9db0e\") " pod="calico-system/calico-apiserver-6fff8bdfbc-7mc8b" Apr 17 23:58:14.978919 kubelet[2741]: I0417 23:58:14.978227 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97tzg\" (UniqueName: \"kubernetes.io/projected/3c8d0225-44a8-413f-8e5f-1d6fb1b28b65-kube-api-access-97tzg\") pod \"whisker-6db78f5767-pb686\" (UID: \"3c8d0225-44a8-413f-8e5f-1d6fb1b28b65\") " pod="calico-system/whisker-6db78f5767-pb686" Apr 17 23:58:14.978919 kubelet[2741]: I0417 23:58:14.978249 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xc7c\" (UniqueName: \"kubernetes.io/projected/e8c5738b-2e74-4bd4-9c11-4f37db2195d6-kube-api-access-9xc7c\") pod \"goldmane-5b85766d88-l2kf2\" (UID: \"e8c5738b-2e74-4bd4-9c11-4f37db2195d6\") " pod="calico-system/goldmane-5b85766d88-l2kf2" Apr 17 23:58:14.978919 kubelet[2741]: I0417 23:58:14.978262 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhss9\" (UniqueName: \"kubernetes.io/projected/d9a00af0-47b3-4d37-9e23-a56a19b9db0e-kube-api-access-fhss9\") pod \"calico-apiserver-6fff8bdfbc-7mc8b\" (UID: \"d9a00af0-47b3-4d37-9e23-a56a19b9db0e\") " pod="calico-system/calico-apiserver-6fff8bdfbc-7mc8b" Apr 17 23:58:14.978919 kubelet[2741]: I0417 23:58:14.978276 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3c8d0225-44a8-413f-8e5f-1d6fb1b28b65-nginx-config\") pod \"whisker-6db78f5767-pb686\" (UID: \"3c8d0225-44a8-413f-8e5f-1d6fb1b28b65\") " pod="calico-system/whisker-6db78f5767-pb686" Apr 17 23:58:14.979051 kubelet[2741]: I0417 23:58:14.978361 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c8d0225-44a8-413f-8e5f-1d6fb1b28b65-whisker-ca-bundle\") pod \"whisker-6db78f5767-pb686\" (UID: \"3c8d0225-44a8-413f-8e5f-1d6fb1b28b65\") " pod="calico-system/whisker-6db78f5767-pb686" Apr 17 
23:58:14.979051 kubelet[2741]: I0417 23:58:14.978380 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29nkn\" (UniqueName: \"kubernetes.io/projected/4eb46d8d-9eca-422e-ba74-930bbc0b7688-kube-api-access-29nkn\") pod \"coredns-674b8bbfcf-rqhzd\" (UID: \"4eb46d8d-9eca-422e-ba74-930bbc0b7688\") " pod="kube-system/coredns-674b8bbfcf-rqhzd" Apr 17 23:58:14.979051 kubelet[2741]: I0417 23:58:14.978419 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrnnj\" (UniqueName: \"kubernetes.io/projected/0c48190d-40ea-4a8e-95db-e61cbffe8eda-kube-api-access-hrnnj\") pod \"coredns-674b8bbfcf-88tx2\" (UID: \"0c48190d-40ea-4a8e-95db-e61cbffe8eda\") " pod="kube-system/coredns-674b8bbfcf-88tx2" Apr 17 23:58:14.979051 kubelet[2741]: I0417 23:58:14.978440 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e8c5738b-2e74-4bd4-9c11-4f37db2195d6-goldmane-key-pair\") pod \"goldmane-5b85766d88-l2kf2\" (UID: \"e8c5738b-2e74-4bd4-9c11-4f37db2195d6\") " pod="calico-system/goldmane-5b85766d88-l2kf2" Apr 17 23:58:14.979051 kubelet[2741]: I0417 23:58:14.978455 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dprzl\" (UniqueName: \"kubernetes.io/projected/83eecf55-6f3a-4d4f-964b-d260567b14a7-kube-api-access-dprzl\") pod \"calico-kube-controllers-5b464cddd7-wff8q\" (UID: \"83eecf55-6f3a-4d4f-964b-d260567b14a7\") " pod="calico-system/calico-kube-controllers-5b464cddd7-wff8q" Apr 17 23:58:14.979209 kubelet[2741]: I0417 23:58:14.978470 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c48190d-40ea-4a8e-95db-e61cbffe8eda-config-volume\") pod \"coredns-674b8bbfcf-88tx2\" (UID: \"0c48190d-40ea-4a8e-95db-e61cbffe8eda\") " pod="kube-system/coredns-674b8bbfcf-88tx2" Apr 17 23:58:14.979209 kubelet[2741]: I0417 23:58:14.978484 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83eecf55-6f3a-4d4f-964b-d260567b14a7-tigera-ca-bundle\") pod \"calico-kube-controllers-5b464cddd7-wff8q\" (UID: \"83eecf55-6f3a-4d4f-964b-d260567b14a7\") " pod="calico-system/calico-kube-controllers-5b464cddd7-wff8q" Apr 17 23:58:14.979209 kubelet[2741]: I0417 23:58:14.978499 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/47346385-5f17-4c08-b4a3-a3958d4f414b-calico-apiserver-certs\") pod \"calico-apiserver-6fff8bdfbc-mvj4v\" (UID: \"47346385-5f17-4c08-b4a3-a3958d4f414b\") " pod="calico-system/calico-apiserver-6fff8bdfbc-mvj4v" Apr 17 23:58:15.215054 containerd[1571]: time="2026-04-17T23:58:15.214996707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b464cddd7-wff8q,Uid:83eecf55-6f3a-4d4f-964b-d260567b14a7,Namespace:calico-system,Attempt:0,}" Apr 17 23:58:15.225564 kubelet[2741]: E0417 23:58:15.224332 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:58:15.226032 containerd[1571]: time="2026-04-17T23:58:15.226000880Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rqhzd,Uid:4eb46d8d-9eca-422e-ba74-930bbc0b7688,Namespace:kube-system,Attempt:0,}" Apr 17 23:58:15.230450 containerd[1571]: time="2026-04-17T23:58:15.230392068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6db78f5767-pb686,Uid:3c8d0225-44a8-413f-8e5f-1d6fb1b28b65,Namespace:calico-system,Attempt:0,}" Apr 17 23:58:15.236165 containerd[1571]: time="2026-04-17T23:58:15.236100993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fff8bdfbc-mvj4v,Uid:47346385-5f17-4c08-b4a3-a3958d4f414b,Namespace:calico-system,Attempt:0,}" Apr 17 23:58:15.239968 containerd[1571]: time="2026-04-17T23:58:15.239941315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fff8bdfbc-7mc8b,Uid:d9a00af0-47b3-4d37-9e23-a56a19b9db0e,Namespace:calico-system,Attempt:0,}" Apr 17 23:58:15.243544 containerd[1571]: time="2026-04-17T23:58:15.243502619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-l2kf2,Uid:e8c5738b-2e74-4bd4-9c11-4f37db2195d6,Namespace:calico-system,Attempt:0,}" Apr 17 23:58:15.249879 kubelet[2741]: E0417 23:58:15.248851 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:58:15.251456 containerd[1571]: time="2026-04-17T23:58:15.251427520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-88tx2,Uid:0c48190d-40ea-4a8e-95db-e61cbffe8eda,Namespace:kube-system,Attempt:0,}" Apr 17 23:58:15.524879 containerd[1571]: time="2026-04-17T23:58:15.524730883Z" level=info msg="CreateContainer within sandbox \"f33eca45f6c78869e20850bfb8e979891c487b3521e443dbf31bb8b447b39382\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 17 23:58:15.531691 containerd[1571]: time="2026-04-17T23:58:15.531648357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xt426,Uid:8b300517-4fdc-4aae-b868-e2f538976f49,Namespace:calico-system,Attempt:0,}" Apr 17 23:58:15.594392 containerd[1571]: time="2026-04-17T23:58:15.594334275Z" level=error msg="Failed to destroy network for sandbox \"b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.595186 containerd[1571]: time="2026-04-17T23:58:15.595157849Z" level=error msg="encountered an error cleaning up failed sandbox \"b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.595335 containerd[1571]: time="2026-04-17T23:58:15.595299158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-l2kf2,Uid:e8c5738b-2e74-4bd4-9c11-4f37db2195d6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.596819 kubelet[2741]: E0417 
23:58:15.596522 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.596819 kubelet[2741]: E0417 23:58:15.596731 2741 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-l2kf2" Apr 17 23:58:15.596819 kubelet[2741]: E0417 23:58:15.596790 2741 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-l2kf2" Apr 17 23:58:15.598932 kubelet[2741]: E0417 23:58:15.596898 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-l2kf2_calico-system(e8c5738b-2e74-4bd4-9c11-4f37db2195d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-l2kf2_calico-system(e8c5738b-2e74-4bd4-9c11-4f37db2195d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-l2kf2" podUID="e8c5738b-2e74-4bd4-9c11-4f37db2195d6" Apr 17 23:58:15.611485 containerd[1571]: time="2026-04-17T23:58:15.611329591Z" level=info msg="CreateContainer within sandbox \"f33eca45f6c78869e20850bfb8e979891c487b3521e443dbf31bb8b447b39382\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4ed17383e70a2377d9272ed90da8e3ba4d4deebc768264e63b796827b28841b5\"" Apr 17 23:58:15.613296 containerd[1571]: time="2026-04-17T23:58:15.612537431Z" level=info msg="StartContainer for \"4ed17383e70a2377d9272ed90da8e3ba4d4deebc768264e63b796827b28841b5\"" Apr 17 23:58:15.644915 containerd[1571]: time="2026-04-17T23:58:15.644853453Z" level=error msg="Failed to destroy network for sandbox \"6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.645595 containerd[1571]: time="2026-04-17T23:58:15.645569090Z" level=error msg="encountered an error cleaning up failed sandbox \"6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 
23:58:15.645800 containerd[1571]: time="2026-04-17T23:58:15.645681168Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-88tx2,Uid:0c48190d-40ea-4a8e-95db-e61cbffe8eda,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.646030 kubelet[2741]: E0417 23:58:15.645977 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.646232 kubelet[2741]: E0417 23:58:15.646196 2741 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-88tx2" Apr 17 23:58:15.646479 kubelet[2741]: E0417 23:58:15.646264 2741 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-88tx2" Apr 17 23:58:15.646479 kubelet[2741]: E0417 23:58:15.646391 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-88tx2_kube-system(0c48190d-40ea-4a8e-95db-e61cbffe8eda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-88tx2_kube-system(0c48190d-40ea-4a8e-95db-e61cbffe8eda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-88tx2" podUID="0c48190d-40ea-4a8e-95db-e61cbffe8eda" Apr 17 23:58:15.650804 containerd[1571]: time="2026-04-17T23:58:15.650768942Z" level=error msg="Failed to destroy network for sandbox \"ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.651313 containerd[1571]: time="2026-04-17T23:58:15.651277305Z" level=error msg="encountered an error cleaning up failed sandbox \"ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Apr 17 23:58:15.653327 containerd[1571]: time="2026-04-17T23:58:15.653203272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b464cddd7-wff8q,Uid:83eecf55-6f3a-4d4f-964b-d260567b14a7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.653601 kubelet[2741]: E0417 23:58:15.653555 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.653674 kubelet[2741]: E0417 23:58:15.653622 2741 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b464cddd7-wff8q" Apr 17 23:58:15.653674 kubelet[2741]: E0417 23:58:15.653666 2741 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b464cddd7-wff8q" Apr 17 23:58:15.654197 kubelet[2741]: E0417 23:58:15.653733 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b464cddd7-wff8q_calico-system(83eecf55-6f3a-4d4f-964b-d260567b14a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b464cddd7-wff8q_calico-system(83eecf55-6f3a-4d4f-964b-d260567b14a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b464cddd7-wff8q" podUID="83eecf55-6f3a-4d4f-964b-d260567b14a7" Apr 17 23:58:15.667538 containerd[1571]: time="2026-04-17T23:58:15.667478990Z" level=error msg="Failed to destroy network for sandbox \"fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.668363 containerd[1571]: time="2026-04-17T23:58:15.668295833Z" level=error msg="encountered an error cleaning up failed sandbox \"fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.668482 containerd[1571]: time="2026-04-17T23:58:15.668423162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fff8bdfbc-mvj4v,Uid:47346385-5f17-4c08-b4a3-a3958d4f414b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.671163 kubelet[2741]: E0417 23:58:15.668818 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.671163 kubelet[2741]: E0417 23:58:15.668920 2741 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6fff8bdfbc-mvj4v" Apr 17 23:58:15.671163 kubelet[2741]: E0417 23:58:15.668958 2741 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6fff8bdfbc-mvj4v" Apr 17 23:58:15.671317 kubelet[2741]: E0417 23:58:15.669058 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fff8bdfbc-mvj4v_calico-system(47346385-5f17-4c08-b4a3-a3958d4f414b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fff8bdfbc-mvj4v_calico-system(47346385-5f17-4c08-b4a3-a3958d4f414b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6fff8bdfbc-mvj4v" podUID="47346385-5f17-4c08-b4a3-a3958d4f414b" Apr 17 23:58:15.682792 containerd[1571]: time="2026-04-17T23:58:15.682741852Z" level=error msg="Failed to destroy network for sandbox \"71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.685141 containerd[1571]: time="2026-04-17T23:58:15.684448704Z" level=error msg="encountered an error cleaning up 
failed sandbox \"71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.685141 containerd[1571]: time="2026-04-17T23:58:15.684516729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6db78f5767-pb686,Uid:3c8d0225-44a8-413f-8e5f-1d6fb1b28b65,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.688207 kubelet[2741]: E0417 23:58:15.686097 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.688207 kubelet[2741]: E0417 23:58:15.686184 2741 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6db78f5767-pb686" Apr 17 23:58:15.688207 kubelet[2741]: E0417 23:58:15.686215 2741 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6db78f5767-pb686" Apr 17 23:58:15.688353 containerd[1571]: time="2026-04-17T23:58:15.686185298Z" level=error msg="Failed to destroy network for sandbox \"0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.688353 containerd[1571]: time="2026-04-17T23:58:15.687411299Z" level=error msg="encountered an error cleaning up failed sandbox \"0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.688353 containerd[1571]: time="2026-04-17T23:58:15.687569009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rqhzd,Uid:4eb46d8d-9eca-422e-ba74-930bbc0b7688,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.688423 kubelet[2741]: E0417 23:58:15.686273 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6db78f5767-pb686_calico-system(3c8d0225-44a8-413f-8e5f-1d6fb1b28b65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6db78f5767-pb686_calico-system(3c8d0225-44a8-413f-8e5f-1d6fb1b28b65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6db78f5767-pb686" podUID="3c8d0225-44a8-413f-8e5f-1d6fb1b28b65" Apr 17 23:58:15.688423 kubelet[2741]: E0417 23:58:15.687871 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.688423 kubelet[2741]: E0417 23:58:15.687936 2741 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rqhzd" Apr 17 23:58:15.688686 kubelet[2741]: E0417 23:58:15.687970 2741 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rqhzd" Apr 17 23:58:15.688686 kubelet[2741]: E0417 23:58:15.688043 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rqhzd_kube-system(4eb46d8d-9eca-422e-ba74-930bbc0b7688)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rqhzd_kube-system(4eb46d8d-9eca-422e-ba74-930bbc0b7688)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rqhzd" podUID="4eb46d8d-9eca-422e-ba74-930bbc0b7688" Apr 17 23:58:15.702291 containerd[1571]: time="2026-04-17T23:58:15.702238383Z" level=error msg="Failed to destroy network for sandbox \"6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 
17 23:58:15.702988 containerd[1571]: time="2026-04-17T23:58:15.702943529Z" level=error msg="encountered an error cleaning up failed sandbox \"6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.703137 containerd[1571]: time="2026-04-17T23:58:15.703088829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fff8bdfbc-7mc8b,Uid:d9a00af0-47b3-4d37-9e23-a56a19b9db0e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.703523 kubelet[2741]: E0417 23:58:15.703491 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.703633 kubelet[2741]: E0417 23:58:15.703617 2741 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6fff8bdfbc-7mc8b" Apr 17 23:58:15.703752 kubelet[2741]: E0417 23:58:15.703725 2741 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6fff8bdfbc-7mc8b" Apr 17 23:58:15.703934 kubelet[2741]: E0417 23:58:15.703874 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fff8bdfbc-7mc8b_calico-system(d9a00af0-47b3-4d37-9e23-a56a19b9db0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fff8bdfbc-7mc8b_calico-system(d9a00af0-47b3-4d37-9e23-a56a19b9db0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6fff8bdfbc-7mc8b" podUID="d9a00af0-47b3-4d37-9e23-a56a19b9db0e" Apr 17 23:58:15.736037 containerd[1571]: time="2026-04-17T23:58:15.735948897Z" level=error msg="Failed to destroy network for sandbox \"aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.736878 containerd[1571]: time="2026-04-17T23:58:15.736403717Z" level=error msg="encountered an error cleaning up failed sandbox \"aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.736878 containerd[1571]: time="2026-04-17T23:58:15.736459631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xt426,Uid:8b300517-4fdc-4aae-b868-e2f538976f49,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.736987 kubelet[2741]: E0417 23:58:15.736695 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:58:15.736987 kubelet[2741]: E0417 23:58:15.736772 2741 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xt426" Apr 17 23:58:15.736987 kubelet[2741]: E0417 23:58:15.736797 2741 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xt426" Apr 17 23:58:15.737071 kubelet[2741]: E0417 23:58:15.736853 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xt426_calico-system(8b300517-4fdc-4aae-b868-e2f538976f49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xt426_calico-system(8b300517-4fdc-4aae-b868-e2f538976f49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xt426" podUID="8b300517-4fdc-4aae-b868-e2f538976f49" Apr 17 23:58:15.771154 containerd[1571]: time="2026-04-17T23:58:15.770918974Z" level=info msg="StartContainer for \"4ed17383e70a2377d9272ed90da8e3ba4d4deebc768264e63b796827b28841b5\" returns successfully" Apr 17 23:58:16.487273 kubelet[2741]: 
I0417 23:58:16.487235 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Apr 17 23:58:16.489648 kubelet[2741]: I0417 23:58:16.489574 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Apr 17 23:58:16.490821 containerd[1571]: time="2026-04-17T23:58:16.490714603Z" level=info msg="StopPodSandbox for \"6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4\"" Apr 17 23:58:16.492473 containerd[1571]: time="2026-04-17T23:58:16.490938218Z" level=info msg="Ensure that sandbox 6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4 in task-service has been cleanup successfully" Apr 17 23:58:16.492504 kubelet[2741]: I0417 23:58:16.492342 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Apr 17 23:58:16.493215 containerd[1571]: time="2026-04-17T23:58:16.492969977Z" level=info msg="StopPodSandbox for \"fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5\"" Apr 17 23:58:16.494502 containerd[1571]: time="2026-04-17T23:58:16.494440361Z" level=info msg="Ensure that sandbox fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5 in task-service has been cleanup successfully" Apr 17 23:58:16.496148 containerd[1571]: time="2026-04-17T23:58:16.495934777Z" level=info msg="StopPodSandbox for \"b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90\"" Apr 17 23:58:16.496200 containerd[1571]: time="2026-04-17T23:58:16.496172452Z" level=info msg="Ensure that sandbox b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90 in task-service has been cleanup successfully" Apr 17 23:58:16.501823 kubelet[2741]: I0417 23:58:16.501768 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Apr 17 23:58:16.503451 containerd[1571]: time="2026-04-17T23:58:16.503390373Z" level=info msg="StopPodSandbox for \"71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198\"" Apr 17 23:58:16.506396 kubelet[2741]: I0417 23:58:16.505398 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Apr 17 23:58:16.508329 containerd[1571]: time="2026-04-17T23:58:16.508305857Z" level=info msg="Ensure that sandbox 71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198 in task-service has been cleanup successfully" Apr 17 23:58:16.512101 containerd[1571]: time="2026-04-17T23:58:16.509031753Z" level=info msg="StopPodSandbox for \"0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc\"" Apr 17 23:58:16.512375 containerd[1571]: time="2026-04-17T23:58:16.512342615Z" level=info msg="Ensure that sandbox 0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc in task-service has been cleanup successfully" Apr 17 23:58:16.534697 kubelet[2741]: I0417 23:58:16.534668 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Apr 17 23:58:16.539183 kubelet[2741]: I0417 23:58:16.539164 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Apr 17 23:58:16.541586 
containerd[1571]: time="2026-04-17T23:58:16.541311966Z" level=info msg="StopPodSandbox for \"ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827\"" Apr 17 23:58:16.542062 containerd[1571]: time="2026-04-17T23:58:16.541951597Z" level=info msg="Ensure that sandbox ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827 in task-service has been cleanup successfully" Apr 17 23:58:16.546280 containerd[1571]: time="2026-04-17T23:58:16.546259882Z" level=info msg="StopPodSandbox for \"aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5\"" Apr 17 23:58:16.547669 kubelet[2741]: I0417 23:58:16.546451 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Apr 17 23:58:16.548643 containerd[1571]: time="2026-04-17T23:58:16.548393738Z" level=info msg="Ensure that sandbox aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5 in task-service has been cleanup successfully" Apr 17 23:58:16.557813 containerd[1571]: time="2026-04-17T23:58:16.557740425Z" level=info msg="StopPodSandbox for \"6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79\"" Apr 17 23:58:16.558659 containerd[1571]: time="2026-04-17T23:58:16.558613581Z" level=info msg="Ensure that sandbox 6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79 in task-service has been cleanup successfully" Apr 17 23:58:16.564391 kubelet[2741]: I0417 23:58:16.562979 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cb58j" podStartSLOduration=4.318802688 podStartE2EDuration="30.562938007s" podCreationTimestamp="2026-04-17 23:57:46 +0000 UTC" firstStartedPulling="2026-04-17 23:57:47.116176127 +0000 UTC m=+23.762424670" lastFinishedPulling="2026-04-17 23:58:13.360311446 +0000 UTC m=+50.006559989" observedRunningTime="2026-04-17 23:58:16.557198661 +0000 UTC m=+53.203447224" watchObservedRunningTime="2026-04-17 23:58:16.562938007 +0000 UTC m=+53.209186550" Apr 17 23:58:17.024894 containerd[1571]: 2026-04-17 23:58:16.889 [INFO][3892] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Apr 17 23:58:17.024894 containerd[1571]: 2026-04-17 23:58:16.889 [INFO][3892] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" iface="eth0" netns="/var/run/netns/cni-877de7e0-abca-28b7-63f7-979ce3277ba9" Apr 17 23:58:17.024894 containerd[1571]: 2026-04-17 23:58:16.893 [INFO][3892] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" iface="eth0" netns="/var/run/netns/cni-877de7e0-abca-28b7-63f7-979ce3277ba9" Apr 17 23:58:17.024894 containerd[1571]: 2026-04-17 23:58:16.895 [INFO][3892] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" iface="eth0" netns="/var/run/netns/cni-877de7e0-abca-28b7-63f7-979ce3277ba9" Apr 17 23:58:17.024894 containerd[1571]: 2026-04-17 23:58:16.895 [INFO][3892] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Apr 17 23:58:17.024894 containerd[1571]: 2026-04-17 23:58:16.895 [INFO][3892] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Apr 17 23:58:17.024894 containerd[1571]: 2026-04-17 23:58:16.972 [INFO][3992] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" HandleID="k8s-pod-network.fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0" Apr 17 23:58:17.024894 containerd[1571]: 2026-04-17 23:58:16.972 [INFO][3992] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:17.024894 containerd[1571]: 2026-04-17 23:58:16.973 [INFO][3992] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:17.024894 containerd[1571]: 2026-04-17 23:58:16.988 [WARNING][3992] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" HandleID="k8s-pod-network.fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0" Apr 17 23:58:17.024894 containerd[1571]: 2026-04-17 23:58:16.988 [INFO][3992] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" HandleID="k8s-pod-network.fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0" Apr 17 23:58:17.024894 containerd[1571]: 2026-04-17 23:58:16.990 [INFO][3992] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:17.024894 containerd[1571]: 2026-04-17 23:58:17.014 [INFO][3892] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Apr 17 23:58:17.028095 containerd[1571]: time="2026-04-17T23:58:17.027952657Z" level=info msg="TearDown network for sandbox \"fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5\" successfully" Apr 17 23:58:17.028490 containerd[1571]: time="2026-04-17T23:58:17.028421467Z" level=info msg="StopPodSandbox for \"fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5\" returns successfully" Apr 17 23:58:17.034743 systemd[1]: run-netns-cni\x2d877de7e0\x2dabca\x2d28b7\x2d63f7\x2d979ce3277ba9.mount: Deactivated successfully. Apr 17 23:58:17.063625 containerd[1571]: time="2026-04-17T23:58:17.062644383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fff8bdfbc-mvj4v,Uid:47346385-5f17-4c08-b4a3-a3958d4f414b,Namespace:calico-system,Attempt:1,}" Apr 17 23:58:17.092385 containerd[1571]: 2026-04-17 23:58:16.741 [INFO][3938] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Apr 17 23:58:17.092385 containerd[1571]: 2026-04-17 23:58:16.741 [INFO][3938] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" iface="eth0" netns="/var/run/netns/cni-f5382ec0-3af0-ec05-353f-0a2ecb7f8423" Apr 17 23:58:17.092385 containerd[1571]: 2026-04-17 23:58:16.742 [INFO][3938] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" iface="eth0" netns="/var/run/netns/cni-f5382ec0-3af0-ec05-353f-0a2ecb7f8423" Apr 17 23:58:17.092385 containerd[1571]: 2026-04-17 23:58:16.742 [INFO][3938] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" iface="eth0" netns="/var/run/netns/cni-f5382ec0-3af0-ec05-353f-0a2ecb7f8423" Apr 17 23:58:17.092385 containerd[1571]: 2026-04-17 23:58:16.742 [INFO][3938] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Apr 17 23:58:17.092385 containerd[1571]: 2026-04-17 23:58:16.744 [INFO][3938] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Apr 17 23:58:17.092385 containerd[1571]: 2026-04-17 23:58:16.997 [INFO][3968] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" HandleID="k8s-pod-network.6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0" Apr 17 23:58:17.092385 containerd[1571]: 2026-04-17 23:58:16.997 [INFO][3968] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:17.092385 containerd[1571]: 2026-04-17 23:58:16.997 [INFO][3968] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:17.092385 containerd[1571]: 2026-04-17 23:58:17.033 [WARNING][3968] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" HandleID="k8s-pod-network.6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0" Apr 17 23:58:17.092385 containerd[1571]: 2026-04-17 23:58:17.033 [INFO][3968] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" HandleID="k8s-pod-network.6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0" Apr 17 23:58:17.092385 containerd[1571]: 2026-04-17 23:58:17.037 [INFO][3968] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:17.092385 containerd[1571]: 2026-04-17 23:58:17.060 [INFO][3938] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Apr 17 23:58:17.092385 containerd[1571]: time="2026-04-17T23:58:17.092362780Z" level=info msg="TearDown network for sandbox \"6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79\" successfully" Apr 17 23:58:17.093114 containerd[1571]: time="2026-04-17T23:58:17.092396152Z" level=info msg="StopPodSandbox for \"6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79\" returns successfully" Apr 17 23:58:17.098765 systemd[1]: run-netns-cni\x2df5382ec0\x2d3af0\x2dec05\x2d353f\x2d0a2ecb7f8423.mount: Deactivated successfully. 
Apr 17 23:58:17.100257 containerd[1571]: time="2026-04-17T23:58:17.099931670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fff8bdfbc-7mc8b,Uid:d9a00af0-47b3-4d37-9e23-a56a19b9db0e,Namespace:calico-system,Attempt:1,}" Apr 17 23:58:17.111814 containerd[1571]: 2026-04-17 23:58:16.955 [INFO][3887] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Apr 17 23:58:17.111814 containerd[1571]: 2026-04-17 23:58:16.956 [INFO][3887] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" iface="eth0" netns="/var/run/netns/cni-b93ea59d-ae8e-575b-69dd-d594c1ba4f83" Apr 17 23:58:17.111814 containerd[1571]: 2026-04-17 23:58:16.957 [INFO][3887] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" iface="eth0" netns="/var/run/netns/cni-b93ea59d-ae8e-575b-69dd-d594c1ba4f83" Apr 17 23:58:17.111814 containerd[1571]: 2026-04-17 23:58:16.961 [INFO][3887] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" iface="eth0" netns="/var/run/netns/cni-b93ea59d-ae8e-575b-69dd-d594c1ba4f83" Apr 17 23:58:17.111814 containerd[1571]: 2026-04-17 23:58:16.961 [INFO][3887] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Apr 17 23:58:17.111814 containerd[1571]: 2026-04-17 23:58:16.961 [INFO][3887] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Apr 17 23:58:17.111814 containerd[1571]: 2026-04-17 23:58:17.012 [INFO][4006] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" HandleID="k8s-pod-network.0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0" Apr 17 23:58:17.111814 containerd[1571]: 2026-04-17 23:58:17.012 [INFO][4006] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:17.111814 containerd[1571]: 2026-04-17 23:58:17.040 [INFO][4006] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:17.111814 containerd[1571]: 2026-04-17 23:58:17.063 [WARNING][4006] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" HandleID="k8s-pod-network.0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0" Apr 17 23:58:17.111814 containerd[1571]: 2026-04-17 23:58:17.063 [INFO][4006] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" HandleID="k8s-pod-network.0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0" Apr 17 23:58:17.111814 containerd[1571]: 2026-04-17 23:58:17.065 [INFO][4006] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:17.111814 containerd[1571]: 2026-04-17 23:58:17.099 [INFO][3887] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Apr 17 23:58:17.117966 systemd[1]: run-netns-cni\x2db93ea59d\x2dae8e\x2d575b\x2d69dd\x2dd594c1ba4f83.mount: Deactivated successfully. Apr 17 23:58:17.120699 containerd[1571]: time="2026-04-17T23:58:17.116289786Z" level=info msg="TearDown network for sandbox \"0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc\" successfully" Apr 17 23:58:17.121218 containerd[1571]: time="2026-04-17T23:58:17.120964637Z" level=info msg="StopPodSandbox for \"0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc\" returns successfully" Apr 17 23:58:17.122525 containerd[1571]: 2026-04-17 23:58:16.682 [INFO][3882] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Apr 17 23:58:17.122525 containerd[1571]: 2026-04-17 23:58:16.682 [INFO][3882] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" iface="eth0" netns="/var/run/netns/cni-691ab2f7-85f5-480e-66d1-bf4b4610bbd0" Apr 17 23:58:17.122525 containerd[1571]: 2026-04-17 23:58:16.683 [INFO][3882] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" iface="eth0" netns="/var/run/netns/cni-691ab2f7-85f5-480e-66d1-bf4b4610bbd0" Apr 17 23:58:17.122525 containerd[1571]: 2026-04-17 23:58:16.688 [INFO][3882] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" iface="eth0" netns="/var/run/netns/cni-691ab2f7-85f5-480e-66d1-bf4b4610bbd0" Apr 17 23:58:17.122525 containerd[1571]: 2026-04-17 23:58:16.688 [INFO][3882] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Apr 17 23:58:17.122525 containerd[1571]: 2026-04-17 23:58:16.688 [INFO][3882] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Apr 17 23:58:17.122525 containerd[1571]: 2026-04-17 23:58:16.983 [INFO][3955] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" HandleID="k8s-pod-network.b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Workload="172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0" Apr 17 23:58:17.122525 containerd[1571]: 2026-04-17 23:58:16.983 [INFO][3955] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:17.122525 containerd[1571]: 2026-04-17 23:58:17.065 [INFO][3955] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:17.122525 containerd[1571]: 2026-04-17 23:58:17.083 [WARNING][3955] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" HandleID="k8s-pod-network.b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Workload="172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0" Apr 17 23:58:17.122525 containerd[1571]: 2026-04-17 23:58:17.083 [INFO][3955] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" HandleID="k8s-pod-network.b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Workload="172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0" Apr 17 23:58:17.122525 containerd[1571]: 2026-04-17 23:58:17.086 [INFO][3955] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:17.122525 containerd[1571]: 2026-04-17 23:58:17.099 [INFO][3882] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Apr 17 23:58:17.127374 kubelet[2741]: E0417 23:58:17.126457 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:58:17.130319 containerd[1571]: time="2026-04-17T23:58:17.130267495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rqhzd,Uid:4eb46d8d-9eca-422e-ba74-930bbc0b7688,Namespace:kube-system,Attempt:1,}" Apr 17 23:58:17.131659 containerd[1571]: time="2026-04-17T23:58:17.131526233Z" level=info msg="TearDown network for sandbox \"b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90\" successfully" Apr 17 23:58:17.131659 containerd[1571]: time="2026-04-17T23:58:17.131575046Z" level=info msg="StopPodSandbox for \"b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90\" returns successfully" Apr 17 23:58:17.134210 containerd[1571]: time="2026-04-17T23:58:17.133903751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-l2kf2,Uid:e8c5738b-2e74-4bd4-9c11-4f37db2195d6,Namespace:calico-system,Attempt:1,}" Apr 17 23:58:17.163954 containerd[1571]: 2026-04-17 23:58:16.802 [INFO][3856] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Apr 17 23:58:17.163954 containerd[1571]: 2026-04-17 23:58:16.803 [INFO][3856] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" iface="eth0" netns="/var/run/netns/cni-03e8862e-c0c8-1332-557b-059b1391aee1" Apr 17 23:58:17.163954 containerd[1571]: 2026-04-17 23:58:16.804 [INFO][3856] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" iface="eth0" netns="/var/run/netns/cni-03e8862e-c0c8-1332-557b-059b1391aee1" Apr 17 23:58:17.163954 containerd[1571]: 2026-04-17 23:58:16.826 [INFO][3856] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" iface="eth0" netns="/var/run/netns/cni-03e8862e-c0c8-1332-557b-059b1391aee1" Apr 17 23:58:17.163954 containerd[1571]: 2026-04-17 23:58:16.826 [INFO][3856] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Apr 17 23:58:17.163954 containerd[1571]: 2026-04-17 23:58:16.826 [INFO][3856] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Apr 17 23:58:17.163954 containerd[1571]: 2026-04-17 23:58:17.051 [INFO][3982] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" HandleID="k8s-pod-network.6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0" Apr 17 23:58:17.163954 containerd[1571]: 2026-04-17 23:58:17.051 [INFO][3982] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:17.163954 containerd[1571]: 2026-04-17 23:58:17.090 [INFO][3982] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:17.163954 containerd[1571]: 2026-04-17 23:58:17.117 [WARNING][3982] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" HandleID="k8s-pod-network.6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0" Apr 17 23:58:17.163954 containerd[1571]: 2026-04-17 23:58:17.121 [INFO][3982] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" HandleID="k8s-pod-network.6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0" Apr 17 23:58:17.163954 containerd[1571]: 2026-04-17 23:58:17.134 [INFO][3982] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:17.163954 containerd[1571]: 2026-04-17 23:58:17.157 [INFO][3856] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Apr 17 23:58:17.166454 containerd[1571]: time="2026-04-17T23:58:17.164173042Z" level=info msg="TearDown network for sandbox \"6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4\" successfully" Apr 17 23:58:17.166454 containerd[1571]: time="2026-04-17T23:58:17.164220765Z" level=info msg="StopPodSandbox for \"6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4\" returns successfully" Apr 17 23:58:17.166518 kubelet[2741]: E0417 23:58:17.164720 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:58:17.174418 containerd[1571]: time="2026-04-17T23:58:17.174360235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-88tx2,Uid:0c48190d-40ea-4a8e-95db-e61cbffe8eda,Namespace:kube-system,Attempt:1,}" Apr 17 23:58:17.185767 containerd[1571]: 2026-04-17 23:58:16.874 [INFO][3931] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Apr 17 23:58:17.185767 containerd[1571]: 2026-04-17 23:58:16.894 [INFO][3931] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" iface="eth0" netns="/var/run/netns/cni-154e65ef-5117-a55c-a32c-0a0205d67e92" Apr 17 23:58:17.185767 containerd[1571]: 2026-04-17 23:58:16.894 [INFO][3931] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" iface="eth0" netns="/var/run/netns/cni-154e65ef-5117-a55c-a32c-0a0205d67e92" Apr 17 23:58:17.185767 containerd[1571]: 2026-04-17 23:58:16.894 [INFO][3931] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" iface="eth0" netns="/var/run/netns/cni-154e65ef-5117-a55c-a32c-0a0205d67e92" Apr 17 23:58:17.185767 containerd[1571]: 2026-04-17 23:58:16.895 [INFO][3931] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Apr 17 23:58:17.185767 containerd[1571]: 2026-04-17 23:58:16.895 [INFO][3931] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Apr 17 23:58:17.185767 containerd[1571]: 2026-04-17 23:58:17.105 [INFO][3993] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" HandleID="k8s-pod-network.aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Workload="172--232--15--112-k8s-csi--node--driver--xt426-eth0" Apr 17 23:58:17.185767 containerd[1571]: 2026-04-17 23:58:17.105 [INFO][3993] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:17.185767 containerd[1571]: 2026-04-17 23:58:17.143 [INFO][3993] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:17.185767 containerd[1571]: 2026-04-17 23:58:17.156 [WARNING][3993] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" HandleID="k8s-pod-network.aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Workload="172--232--15--112-k8s-csi--node--driver--xt426-eth0" Apr 17 23:58:17.185767 containerd[1571]: 2026-04-17 23:58:17.156 [INFO][3993] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" HandleID="k8s-pod-network.aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Workload="172--232--15--112-k8s-csi--node--driver--xt426-eth0" Apr 17 23:58:17.185767 containerd[1571]: 2026-04-17 23:58:17.161 [INFO][3993] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:17.185767 containerd[1571]: 2026-04-17 23:58:17.175 [INFO][3931] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Apr 17 23:58:17.186329 containerd[1571]: time="2026-04-17T23:58:17.185991267Z" level=info msg="TearDown network for sandbox \"aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5\" successfully" Apr 17 23:58:17.186329 containerd[1571]: time="2026-04-17T23:58:17.186018829Z" level=info msg="StopPodSandbox for \"aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5\" returns successfully" Apr 17 23:58:17.189305 containerd[1571]: time="2026-04-17T23:58:17.189269211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xt426,Uid:8b300517-4fdc-4aae-b868-e2f538976f49,Namespace:calico-system,Attempt:1,}" Apr 17 23:58:17.262183 containerd[1571]: 2026-04-17 23:58:17.074 [INFO][3945] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Apr 17 23:58:17.262183 containerd[1571]: 2026-04-17 23:58:17.077 [INFO][3945] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" iface="eth0" netns="/var/run/netns/cni-8d3a770c-169b-8a95-8269-17a42596d329" Apr 17 23:58:17.262183 containerd[1571]: 2026-04-17 23:58:17.078 [INFO][3945] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" iface="eth0" netns="/var/run/netns/cni-8d3a770c-169b-8a95-8269-17a42596d329" Apr 17 23:58:17.262183 containerd[1571]: 2026-04-17 23:58:17.081 [INFO][3945] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" iface="eth0" netns="/var/run/netns/cni-8d3a770c-169b-8a95-8269-17a42596d329" Apr 17 23:58:17.262183 containerd[1571]: 2026-04-17 23:58:17.081 [INFO][3945] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Apr 17 23:58:17.262183 containerd[1571]: 2026-04-17 23:58:17.081 [INFO][3945] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Apr 17 23:58:17.262183 containerd[1571]: 2026-04-17 23:58:17.182 [INFO][4028] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" HandleID="k8s-pod-network.ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Workload="172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0" Apr 17 23:58:17.262183 containerd[1571]: 2026-04-17 23:58:17.183 [INFO][4028] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:17.262183 containerd[1571]: 2026-04-17 23:58:17.183 [INFO][4028] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:17.262183 containerd[1571]: 2026-04-17 23:58:17.210 [WARNING][4028] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" HandleID="k8s-pod-network.ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Workload="172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0" Apr 17 23:58:17.262183 containerd[1571]: 2026-04-17 23:58:17.210 [INFO][4028] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" HandleID="k8s-pod-network.ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Workload="172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0" Apr 17 23:58:17.262183 containerd[1571]: 2026-04-17 23:58:17.216 [INFO][4028] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:17.262183 containerd[1571]: 2026-04-17 23:58:17.223 [INFO][3945] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Apr 17 23:58:17.265427 containerd[1571]: time="2026-04-17T23:58:17.265336548Z" level=info msg="TearDown network for sandbox \"ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827\" successfully" Apr 17 23:58:17.265509 containerd[1571]: time="2026-04-17T23:58:17.265428573Z" level=info msg="StopPodSandbox for \"ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827\" returns successfully" Apr 17 23:58:17.273660 containerd[1571]: time="2026-04-17T23:58:17.273599881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b464cddd7-wff8q,Uid:83eecf55-6f3a-4d4f-964b-d260567b14a7,Namespace:calico-system,Attempt:1,}" Apr 17 23:58:17.293195 containerd[1571]: 2026-04-17 23:58:16.958 [INFO][3888] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Apr 17 23:58:17.293195 containerd[1571]: 2026-04-17 23:58:16.958 [INFO][3888] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" iface="eth0" netns="/var/run/netns/cni-af929308-d889-f03b-3c41-35082bf100a8" Apr 17 23:58:17.293195 containerd[1571]: 2026-04-17 23:58:16.959 [INFO][3888] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" iface="eth0" netns="/var/run/netns/cni-af929308-d889-f03b-3c41-35082bf100a8" Apr 17 23:58:17.293195 containerd[1571]: 2026-04-17 23:58:16.960 [INFO][3888] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" iface="eth0" netns="/var/run/netns/cni-af929308-d889-f03b-3c41-35082bf100a8" Apr 17 23:58:17.293195 containerd[1571]: 2026-04-17 23:58:16.960 [INFO][3888] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Apr 17 23:58:17.293195 containerd[1571]: 2026-04-17 23:58:16.960 [INFO][3888] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Apr 17 23:58:17.293195 containerd[1571]: 2026-04-17 23:58:17.204 [INFO][4007] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" HandleID="k8s-pod-network.71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Workload="172--232--15--112-k8s-whisker--6db78f5767--pb686-eth0" Apr 17 23:58:17.293195 containerd[1571]: 2026-04-17 23:58:17.204 [INFO][4007] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:17.293195 containerd[1571]: 2026-04-17 23:58:17.216 [INFO][4007] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:17.293195 containerd[1571]: 2026-04-17 23:58:17.232 [WARNING][4007] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" HandleID="k8s-pod-network.71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Workload="172--232--15--112-k8s-whisker--6db78f5767--pb686-eth0" Apr 17 23:58:17.293195 containerd[1571]: 2026-04-17 23:58:17.233 [INFO][4007] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" HandleID="k8s-pod-network.71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Workload="172--232--15--112-k8s-whisker--6db78f5767--pb686-eth0" Apr 17 23:58:17.293195 containerd[1571]: 2026-04-17 23:58:17.256 [INFO][4007] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:17.293195 containerd[1571]: 2026-04-17 23:58:17.284 [INFO][3888] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Apr 17 23:58:17.298616 containerd[1571]: time="2026-04-17T23:58:17.298578803Z" level=info msg="TearDown network for sandbox \"71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198\" successfully" Apr 17 23:58:17.299029 containerd[1571]: time="2026-04-17T23:58:17.299013120Z" level=info msg="StopPodSandbox for \"71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198\" returns successfully" Apr 17 23:58:17.330076 kubelet[2741]: I0417 23:58:17.320275 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97tzg\" (UniqueName: \"kubernetes.io/projected/3c8d0225-44a8-413f-8e5f-1d6fb1b28b65-kube-api-access-97tzg\") pod \"3c8d0225-44a8-413f-8e5f-1d6fb1b28b65\" (UID: \"3c8d0225-44a8-413f-8e5f-1d6fb1b28b65\") " Apr 17 23:58:17.330076 kubelet[2741]: I0417 23:58:17.320324 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3c8d0225-44a8-413f-8e5f-1d6fb1b28b65-nginx-config\") pod \"3c8d0225-44a8-413f-8e5f-1d6fb1b28b65\" (UID: \"3c8d0225-44a8-413f-8e5f-1d6fb1b28b65\") " Apr 17 23:58:17.330076 kubelet[2741]: I0417 23:58:17.320347 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c8d0225-44a8-413f-8e5f-1d6fb1b28b65-whisker-ca-bundle\") pod \"3c8d0225-44a8-413f-8e5f-1d6fb1b28b65\" (UID: \"3c8d0225-44a8-413f-8e5f-1d6fb1b28b65\") " Apr 17 23:58:17.330076 kubelet[2741]: I0417 23:58:17.320380 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3c8d0225-44a8-413f-8e5f-1d6fb1b28b65-whisker-backend-key-pair\") pod \"3c8d0225-44a8-413f-8e5f-1d6fb1b28b65\" (UID: \"3c8d0225-44a8-413f-8e5f-1d6fb1b28b65\") " Apr 17 23:58:17.337533 kubelet[2741]: I0417 23:58:17.337466 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c8d0225-44a8-413f-8e5f-1d6fb1b28b65-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3c8d0225-44a8-413f-8e5f-1d6fb1b28b65" (UID: "3c8d0225-44a8-413f-8e5f-1d6fb1b28b65"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:58:17.338466 kubelet[2741]: I0417 23:58:17.338425 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c8d0225-44a8-413f-8e5f-1d6fb1b28b65-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "3c8d0225-44a8-413f-8e5f-1d6fb1b28b65" (UID: "3c8d0225-44a8-413f-8e5f-1d6fb1b28b65"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:58:17.355238 kubelet[2741]: I0417 23:58:17.354629 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8d0225-44a8-413f-8e5f-1d6fb1b28b65-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3c8d0225-44a8-413f-8e5f-1d6fb1b28b65" (UID: "3c8d0225-44a8-413f-8e5f-1d6fb1b28b65"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 23:58:17.356862 kubelet[2741]: I0417 23:58:17.355841 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c8d0225-44a8-413f-8e5f-1d6fb1b28b65-kube-api-access-97tzg" (OuterVolumeSpecName: "kube-api-access-97tzg") pod "3c8d0225-44a8-413f-8e5f-1d6fb1b28b65" (UID: "3c8d0225-44a8-413f-8e5f-1d6fb1b28b65"). InnerVolumeSpecName "kube-api-access-97tzg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:58:17.424194 kubelet[2741]: I0417 23:58:17.422895 2741 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3c8d0225-44a8-413f-8e5f-1d6fb1b28b65-whisker-backend-key-pair\") on node \"172-232-15-112\" DevicePath \"\"" Apr 17 23:58:17.424194 kubelet[2741]: I0417 23:58:17.422941 2741 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-97tzg\" (UniqueName: \"kubernetes.io/projected/3c8d0225-44a8-413f-8e5f-1d6fb1b28b65-kube-api-access-97tzg\") on node \"172-232-15-112\" DevicePath \"\"" Apr 17 23:58:17.424194 kubelet[2741]: I0417 23:58:17.422956 2741 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3c8d0225-44a8-413f-8e5f-1d6fb1b28b65-nginx-config\") on node \"172-232-15-112\" DevicePath \"\"" Apr 17 23:58:17.424194 kubelet[2741]: I0417 23:58:17.422965 2741 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c8d0225-44a8-413f-8e5f-1d6fb1b28b65-whisker-ca-bundle\") on node \"172-232-15-112\" DevicePath \"\"" Apr 17 23:58:17.729789 kubelet[2741]: I0417 23:58:17.729437 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc5f4\" (UniqueName: \"kubernetes.io/projected/001bd93c-0762-4ea4-a9b6-cb368ce16ca6-kube-api-access-xc5f4\") pod \"whisker-59bdfdf7b8-ksvgv\" (UID: \"001bd93c-0762-4ea4-a9b6-cb368ce16ca6\") " pod="calico-system/whisker-59bdfdf7b8-ksvgv" Apr 17 23:58:17.734039 kubelet[2741]: I0417 23:58:17.732751 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/001bd93c-0762-4ea4-a9b6-cb368ce16ca6-nginx-config\") pod \"whisker-59bdfdf7b8-ksvgv\" (UID: \"001bd93c-0762-4ea4-a9b6-cb368ce16ca6\") " pod="calico-system/whisker-59bdfdf7b8-ksvgv" Apr 17 23:58:17.734039 kubelet[2741]: I0417 23:58:17.732822 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/001bd93c-0762-4ea4-a9b6-cb368ce16ca6-whisker-backend-key-pair\") pod \"whisker-59bdfdf7b8-ksvgv\" (UID: \"001bd93c-0762-4ea4-a9b6-cb368ce16ca6\") " pod="calico-system/whisker-59bdfdf7b8-ksvgv" Apr 17 23:58:17.734039 kubelet[2741]: I0417 23:58:17.732845 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/001bd93c-0762-4ea4-a9b6-cb368ce16ca6-whisker-ca-bundle\") pod \"whisker-59bdfdf7b8-ksvgv\" (UID: \"001bd93c-0762-4ea4-a9b6-cb368ce16ca6\") " pod="calico-system/whisker-59bdfdf7b8-ksvgv" Apr 17 23:58:18.025398 containerd[1571]: time="2026-04-17T23:58:18.024493879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59bdfdf7b8-ksvgv,Uid:001bd93c-0762-4ea4-a9b6-cb368ce16ca6,Namespace:calico-system,Attempt:0,}" Apr 17 23:58:18.050915 
systemd[1]: run-netns-cni\x2d154e65ef\x2d5117\x2da55c\x2da32c\x2d0a0205d67e92.mount: Deactivated successfully. Apr 17 23:58:18.053040 systemd[1]: run-netns-cni\x2d03e8862e\x2dc0c8\x2d1332\x2d557b\x2d059b1391aee1.mount: Deactivated successfully. Apr 17 23:58:18.053400 systemd[1]: run-netns-cni\x2d691ab2f7\x2d85f5\x2d480e\x2d66d1\x2dbf4b4610bbd0.mount: Deactivated successfully. Apr 17 23:58:18.053573 systemd[1]: run-netns-cni\x2daf929308\x2dd889\x2df03b\x2d3c41\x2d35082bf100a8.mount: Deactivated successfully. Apr 17 23:58:18.054260 systemd[1]: run-netns-cni\x2d8d3a770c\x2d169b\x2d8a95\x2d8269\x2d17a42596d329.mount: Deactivated successfully. Apr 17 23:58:18.054405 systemd[1]: var-lib-kubelet-pods-3c8d0225\x2d44a8\x2d413f\x2d8e5f\x2d1d6fb1b28b65-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 17 23:58:18.054551 systemd[1]: var-lib-kubelet-pods-3c8d0225\x2d44a8\x2d413f\x2d8e5f\x2d1d6fb1b28b65-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d97tzg.mount: Deactivated successfully. Apr 17 23:58:18.502705 systemd-networkd[1236]: cali119c37c994f: Link UP Apr 17 23:58:18.502935 systemd-networkd[1236]: cali119c37c994f: Gained carrier Apr 17 23:58:18.583107 containerd[1571]: 2026-04-17 23:58:17.365 [ERROR][4058] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:58:18.583107 containerd[1571]: 2026-04-17 23:58:17.458 [INFO][4058] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0 goldmane-5b85766d88- calico-system e8c5738b-2e74-4bd4-9c11-4f37db2195d6 937 0 2026-04-17 23:57:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-232-15-112 goldmane-5b85766d88-l2kf2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali119c37c994f [] [] }} ContainerID="abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04" Namespace="calico-system" Pod="goldmane-5b85766d88-l2kf2" WorkloadEndpoint="172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-" Apr 17 23:58:18.583107 containerd[1571]: 2026-04-17 23:58:17.460 [INFO][4058] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04" Namespace="calico-system" Pod="goldmane-5b85766d88-l2kf2" WorkloadEndpoint="172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0" Apr 17 23:58:18.583107 containerd[1571]: 2026-04-17 23:58:17.994 [INFO][4206] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04" HandleID="k8s-pod-network.abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04" Workload="172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0" Apr 17 23:58:18.583107 containerd[1571]: 2026-04-17 23:58:18.111 [INFO][4206] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04" HandleID="k8s-pod-network.abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04" Workload="172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdc00), 
Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-15-112", "pod":"goldmane-5b85766d88-l2kf2", "timestamp":"2026-04-17 23:58:17.994890629 +0000 UTC"}, Hostname:"172-232-15-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00036eb00)} Apr 17 23:58:18.583107 containerd[1571]: 2026-04-17 23:58:18.111 [INFO][4206] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:18.583107 containerd[1571]: 2026-04-17 23:58:18.111 [INFO][4206] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:18.583107 containerd[1571]: 2026-04-17 23:58:18.111 [INFO][4206] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-15-112' Apr 17 23:58:18.583107 containerd[1571]: 2026-04-17 23:58:18.143 [INFO][4206] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04" host="172-232-15-112" Apr 17 23:58:18.583107 containerd[1571]: 2026-04-17 23:58:18.207 [INFO][4206] ipam/ipam.go 409: Looking up existing affinities for host host="172-232-15-112" Apr 17 23:58:18.583107 containerd[1571]: 2026-04-17 23:58:18.229 [INFO][4206] ipam/ipam.go 526: Trying affinity for 192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:18.583107 containerd[1571]: 2026-04-17 23:58:18.249 [INFO][4206] ipam/ipam.go 160: Attempting to load block cidr=192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:18.583107 containerd[1571]: 2026-04-17 23:58:18.275 [INFO][4206] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:18.583107 containerd[1571]: 2026-04-17 23:58:18.275 [INFO][4206] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.119.64/26 handle="k8s-pod-network.abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04" host="172-232-15-112" Apr 17 23:58:18.583107 containerd[1571]: 2026-04-17 23:58:18.295 [INFO][4206] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04 Apr 17 23:58:18.583107 containerd[1571]: 2026-04-17 23:58:18.305 [INFO][4206] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.119.64/26 handle="k8s-pod-network.abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04" host="172-232-15-112" Apr 17 23:58:18.583107 containerd[1571]: 2026-04-17 23:58:18.354 [INFO][4206] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.119.65/26] block=192.168.119.64/26 handle="k8s-pod-network.abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04" host="172-232-15-112" Apr 17 23:58:18.583107 containerd[1571]: 2026-04-17 23:58:18.354 [INFO][4206] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.119.65/26] handle="k8s-pod-network.abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04" host="172-232-15-112" Apr 17 23:58:18.583107 containerd[1571]: 2026-04-17 23:58:18.354 [INFO][4206] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:58:18.583107 containerd[1571]: 2026-04-17 23:58:18.354 [INFO][4206] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.119.65/26] IPv6=[] ContainerID="abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04" HandleID="k8s-pod-network.abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04" Workload="172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0" Apr 17 23:58:18.583926 containerd[1571]: 2026-04-17 23:58:18.401 [INFO][4058] cni-plugin/k8s.go 418: Populated endpoint ContainerID="abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04" Namespace="calico-system" Pod="goldmane-5b85766d88-l2kf2" WorkloadEndpoint="172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"e8c5738b-2e74-4bd4-9c11-4f37db2195d6", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"", Pod:"goldmane-5b85766d88-l2kf2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.119.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali119c37c994f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:18.583926 containerd[1571]: 2026-04-17 23:58:18.402 [INFO][4058] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.65/32] ContainerID="abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04" Namespace="calico-system" Pod="goldmane-5b85766d88-l2kf2" WorkloadEndpoint="172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0" Apr 17 23:58:18.583926 containerd[1571]: 2026-04-17 23:58:18.402 [INFO][4058] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali119c37c994f ContainerID="abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04" Namespace="calico-system" Pod="goldmane-5b85766d88-l2kf2" WorkloadEndpoint="172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0" Apr 17 23:58:18.583926 containerd[1571]: 2026-04-17 23:58:18.508 [INFO][4058] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04" Namespace="calico-system" Pod="goldmane-5b85766d88-l2kf2" WorkloadEndpoint="172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0" Apr 17 23:58:18.583926 containerd[1571]: 2026-04-17 23:58:18.514 [INFO][4058] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04" Namespace="calico-system" Pod="goldmane-5b85766d88-l2kf2" 
WorkloadEndpoint="172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"e8c5738b-2e74-4bd4-9c11-4f37db2195d6", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04", Pod:"goldmane-5b85766d88-l2kf2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.119.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali119c37c994f", MAC:"c2:1e:af:42:a2:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:18.583926 containerd[1571]: 2026-04-17 23:58:18.549 [INFO][4058] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04" Namespace="calico-system" Pod="goldmane-5b85766d88-l2kf2" WorkloadEndpoint="172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0" Apr 17 23:58:18.642824 systemd-networkd[1236]: cali1d27b58af55: Link UP Apr 17 23:58:18.649548 systemd-networkd[1236]: cali1d27b58af55: Gained carrier Apr 17 23:58:18.744807 systemd-networkd[1236]: cali825d1771b61: Link UP Apr 17 23:58:18.753573 systemd-networkd[1236]: cali825d1771b61: Gained carrier Apr 17 23:58:18.798142 containerd[1571]: 2026-04-17 23:58:17.329 [ERROR][4032] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:58:18.798142 containerd[1571]: 2026-04-17 23:58:17.435 [INFO][4032] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0 calico-apiserver-6fff8bdfbc- calico-system 47346385-5f17-4c08-b4a3-a3958d4f414b 941 0 2026-04-17 23:57:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fff8bdfbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-232-15-112 calico-apiserver-6fff8bdfbc-mvj4v eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali825d1771b61 [] [] }} ContainerID="cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b" Namespace="calico-system" Pod="calico-apiserver-6fff8bdfbc-mvj4v" WorkloadEndpoint="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-" Apr 17 23:58:18.798142 containerd[1571]: 2026-04-17 23:58:17.445 
[INFO][4032] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b" Namespace="calico-system" Pod="calico-apiserver-6fff8bdfbc-mvj4v" WorkloadEndpoint="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0" Apr 17 23:58:18.798142 containerd[1571]: 2026-04-17 23:58:18.189 [INFO][4192] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b" HandleID="k8s-pod-network.cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0" Apr 17 23:58:18.798142 containerd[1571]: 2026-04-17 23:58:18.238 [INFO][4192] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b" HandleID="k8s-pod-network.cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003716e0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-15-112", "pod":"calico-apiserver-6fff8bdfbc-mvj4v", "timestamp":"2026-04-17 23:58:18.189271968 +0000 UTC"}, Hostname:"172-232-15-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00042c000)} Apr 17 23:58:18.798142 containerd[1571]: 2026-04-17 23:58:18.238 [INFO][4192] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:18.798142 containerd[1571]: 2026-04-17 23:58:18.562 [INFO][4192] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:58:18.798142 containerd[1571]: 2026-04-17 23:58:18.562 [INFO][4192] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-15-112' Apr 17 23:58:18.798142 containerd[1571]: 2026-04-17 23:58:18.594 [INFO][4192] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b" host="172-232-15-112" Apr 17 23:58:18.798142 containerd[1571]: 2026-04-17 23:58:18.619 [INFO][4192] ipam/ipam.go 409: Looking up existing affinities for host host="172-232-15-112" Apr 17 23:58:18.798142 containerd[1571]: 2026-04-17 23:58:18.669 [INFO][4192] ipam/ipam.go 526: Trying affinity for 192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:18.798142 containerd[1571]: 2026-04-17 23:58:18.678 [INFO][4192] ipam/ipam.go 160: Attempting to load block cidr=192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:18.798142 containerd[1571]: 2026-04-17 23:58:18.694 [INFO][4192] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:18.798142 containerd[1571]: 2026-04-17 23:58:18.700 [INFO][4192] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.119.64/26 handle="k8s-pod-network.cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b" host="172-232-15-112" Apr 17 23:58:18.798142 containerd[1571]: 2026-04-17 23:58:18.702 [INFO][4192] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b Apr 17 23:58:18.798142 containerd[1571]: 2026-04-17 23:58:18.709 [INFO][4192] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.119.64/26 handle="k8s-pod-network.cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b" host="172-232-15-112" Apr 17 23:58:18.798142 containerd[1571]: 2026-04-17 23:58:18.717 [INFO][4192] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.119.67/26] block=192.168.119.64/26 handle="k8s-pod-network.cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b" host="172-232-15-112" Apr 17 23:58:18.798142 containerd[1571]: 2026-04-17 23:58:18.717 [INFO][4192] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.119.67/26] handle="k8s-pod-network.cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b" host="172-232-15-112" Apr 17 23:58:18.798142 containerd[1571]: 2026-04-17 23:58:18.718 [INFO][4192] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:58:18.798142 containerd[1571]: 2026-04-17 23:58:18.718 [INFO][4192] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.119.67/26] IPv6=[] ContainerID="cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b" HandleID="k8s-pod-network.cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0" Apr 17 23:58:18.798860 containerd[1571]: 2026-04-17 23:58:18.735 [INFO][4032] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b" Namespace="calico-system" Pod="calico-apiserver-6fff8bdfbc-mvj4v" WorkloadEndpoint="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0", GenerateName:"calico-apiserver-6fff8bdfbc-", Namespace:"calico-system", SelfLink:"", UID:"47346385-5f17-4c08-b4a3-a3958d4f414b", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fff8bdfbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"", Pod:"calico-apiserver-6fff8bdfbc-mvj4v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali825d1771b61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:18.798860 containerd[1571]: 2026-04-17 23:58:18.735 [INFO][4032] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.67/32] ContainerID="cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b" Namespace="calico-system" Pod="calico-apiserver-6fff8bdfbc-mvj4v" WorkloadEndpoint="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0" Apr 17 23:58:18.798860 containerd[1571]: 2026-04-17 23:58:18.735 [INFO][4032] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali825d1771b61 ContainerID="cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b" Namespace="calico-system" Pod="calico-apiserver-6fff8bdfbc-mvj4v" WorkloadEndpoint="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0" Apr 17 23:58:18.798860 containerd[1571]: 2026-04-17 23:58:18.751 [INFO][4032] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b" Namespace="calico-system" Pod="calico-apiserver-6fff8bdfbc-mvj4v" WorkloadEndpoint="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0" Apr 17 23:58:18.798860 containerd[1571]: 2026-04-17 23:58:18.753 [INFO][4032] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b" Namespace="calico-system" Pod="calico-apiserver-6fff8bdfbc-mvj4v" WorkloadEndpoint="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0", GenerateName:"calico-apiserver-6fff8bdfbc-", Namespace:"calico-system", SelfLink:"", UID:"47346385-5f17-4c08-b4a3-a3958d4f414b", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fff8bdfbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b", Pod:"calico-apiserver-6fff8bdfbc-mvj4v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali825d1771b61", MAC:"ee:b8:25:e7:7e:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:18.798860 containerd[1571]: 2026-04-17 23:58:18.778 [INFO][4032] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b" Namespace="calico-system" Pod="calico-apiserver-6fff8bdfbc-mvj4v" WorkloadEndpoint="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0" Apr 17 23:58:18.809533 systemd-networkd[1236]: calide41ec509a9: Link UP Apr 17 23:58:18.813035 systemd-networkd[1236]: calide41ec509a9: Gained carrier Apr 17 23:58:18.840724 containerd[1571]: time="2026-04-17T23:58:18.839395544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:58:18.840724 containerd[1571]: time="2026-04-17T23:58:18.839511751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:58:18.840724 containerd[1571]: time="2026-04-17T23:58:18.839523441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:18.840724 containerd[1571]: time="2026-04-17T23:58:18.839662910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:18.850728 containerd[1571]: 2026-04-17 23:58:17.401 [ERROR][4075] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:58:18.850728 containerd[1571]: 2026-04-17 23:58:17.512 [INFO][4075] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0 coredns-674b8bbfcf- kube-system 4eb46d8d-9eca-422e-ba74-930bbc0b7688 942 0 2026-04-17 23:57:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-15-112 coredns-674b8bbfcf-rqhzd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calide41ec509a9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqhzd" WorkloadEndpoint="172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-" Apr 17 23:58:18.850728 containerd[1571]: 2026-04-17 23:58:17.512 [INFO][4075] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqhzd" WorkloadEndpoint="172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0" Apr 17 23:58:18.850728 containerd[1571]: 2026-04-17 23:58:18.201 [INFO][4208] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad" HandleID="k8s-pod-network.e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0" Apr 17 23:58:18.850728 containerd[1571]: 2026-04-17 23:58:18.238 [INFO][4208] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad" HandleID="k8s-pod-network.e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003781f0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-15-112", "pod":"coredns-674b8bbfcf-rqhzd", "timestamp":"2026-04-17 23:58:18.20104012 +0000 UTC"}, Hostname:"172-232-15-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002f4580)} Apr 17 23:58:18.850728 containerd[1571]: 2026-04-17 23:58:18.238 [INFO][4208] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:18.850728 containerd[1571]: 2026-04-17 23:58:18.718 [INFO][4208] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:58:18.850728 containerd[1571]: 2026-04-17 23:58:18.720 [INFO][4208] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-15-112' Apr 17 23:58:18.850728 containerd[1571]: 2026-04-17 23:58:18.734 [INFO][4208] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad" host="172-232-15-112" Apr 17 23:58:18.850728 containerd[1571]: 2026-04-17 23:58:18.752 [INFO][4208] ipam/ipam.go 409: Looking up existing affinities for host host="172-232-15-112" Apr 17 23:58:18.850728 containerd[1571]: 2026-04-17 23:58:18.764 [INFO][4208] ipam/ipam.go 526: Trying affinity for 192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:18.850728 containerd[1571]: 2026-04-17 23:58:18.774 [INFO][4208] ipam/ipam.go 160: Attempting to load block cidr=192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:18.850728 containerd[1571]: 2026-04-17 23:58:18.780 [INFO][4208] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:18.850728 containerd[1571]: 2026-04-17 23:58:18.780 [INFO][4208] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.119.64/26 handle="k8s-pod-network.e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad" host="172-232-15-112" Apr 17 23:58:18.850728 containerd[1571]: 2026-04-17 23:58:18.782 [INFO][4208] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad Apr 17 23:58:18.850728 containerd[1571]: 2026-04-17 23:58:18.787 [INFO][4208] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.119.64/26 handle="k8s-pod-network.e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad" host="172-232-15-112" Apr 17 23:58:18.850728 containerd[1571]: 2026-04-17 23:58:18.794 [INFO][4208] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.119.68/26] block=192.168.119.64/26 handle="k8s-pod-network.e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad" host="172-232-15-112" Apr 17 23:58:18.850728 containerd[1571]: 2026-04-17 23:58:18.794 [INFO][4208] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.119.68/26] handle="k8s-pod-network.e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad" host="172-232-15-112" Apr 17 23:58:18.850728 containerd[1571]: 2026-04-17 23:58:18.794 [INFO][4208] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:58:18.850728 containerd[1571]: 2026-04-17 23:58:18.794 [INFO][4208] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.119.68/26] IPv6=[] ContainerID="e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad" HandleID="k8s-pod-network.e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0" Apr 17 23:58:18.853031 containerd[1571]: 2026-04-17 23:58:18.805 [INFO][4075] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqhzd" WorkloadEndpoint="172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4eb46d8d-9eca-422e-ba74-930bbc0b7688", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"", Pod:"coredns-674b8bbfcf-rqhzd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide41ec509a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:18.853031 containerd[1571]: 2026-04-17 23:58:18.805 [INFO][4075] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.68/32] ContainerID="e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqhzd" WorkloadEndpoint="172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0" Apr 17 23:58:18.853031 containerd[1571]: 2026-04-17 23:58:18.805 [INFO][4075] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide41ec509a9 ContainerID="e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqhzd" WorkloadEndpoint="172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0" Apr 17 23:58:18.853031 containerd[1571]: 2026-04-17 23:58:18.814 [INFO][4075] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqhzd" 
WorkloadEndpoint="172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0" Apr 17 23:58:18.853031 containerd[1571]: 2026-04-17 23:58:18.815 [INFO][4075] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqhzd" WorkloadEndpoint="172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4eb46d8d-9eca-422e-ba74-930bbc0b7688", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad", Pod:"coredns-674b8bbfcf-rqhzd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide41ec509a9", MAC:"3e:0f:88:49:73:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:18.853031 containerd[1571]: 2026-04-17 23:58:18.830 [INFO][4075] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqhzd" WorkloadEndpoint="172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0" Apr 17 23:58:18.925412 containerd[1571]: 2026-04-17 23:58:17.317 [ERROR][4040] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:58:18.925412 containerd[1571]: 2026-04-17 23:58:17.445 [INFO][4040] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0 calico-apiserver-6fff8bdfbc- calico-system d9a00af0-47b3-4d37-9e23-a56a19b9db0e 938 0 2026-04-17 23:57:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fff8bdfbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-232-15-112 
calico-apiserver-6fff8bdfbc-7mc8b eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali1d27b58af55 [] [] }} ContainerID="310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70" Namespace="calico-system" Pod="calico-apiserver-6fff8bdfbc-7mc8b" WorkloadEndpoint="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-" Apr 17 23:58:18.925412 containerd[1571]: 2026-04-17 23:58:17.448 [INFO][4040] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70" Namespace="calico-system" Pod="calico-apiserver-6fff8bdfbc-7mc8b" WorkloadEndpoint="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0" Apr 17 23:58:18.925412 containerd[1571]: 2026-04-17 23:58:18.022 [INFO][4197] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70" HandleID="k8s-pod-network.310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0" Apr 17 23:58:18.925412 containerd[1571]: 2026-04-17 23:58:18.117 [INFO][4197] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70" HandleID="k8s-pod-network.310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000423b00), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-15-112", "pod":"calico-apiserver-6fff8bdfbc-7mc8b", "timestamp":"2026-04-17 23:58:18.022888602 +0000 UTC"}, Hostname:"172-232-15-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000f2580)} Apr 17 23:58:18.925412 containerd[1571]: 2026-04-17 23:58:18.119 [INFO][4197] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:18.925412 containerd[1571]: 2026-04-17 23:58:18.356 [INFO][4197] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:58:18.925412 containerd[1571]: 2026-04-17 23:58:18.362 [INFO][4197] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-15-112' Apr 17 23:58:18.925412 containerd[1571]: 2026-04-17 23:58:18.380 [INFO][4197] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70" host="172-232-15-112" Apr 17 23:58:18.925412 containerd[1571]: 2026-04-17 23:58:18.396 [INFO][4197] ipam/ipam.go 409: Looking up existing affinities for host host="172-232-15-112" Apr 17 23:58:18.925412 containerd[1571]: 2026-04-17 23:58:18.412 [INFO][4197] ipam/ipam.go 526: Trying affinity for 192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:18.925412 containerd[1571]: 2026-04-17 23:58:18.416 [INFO][4197] ipam/ipam.go 160: Attempting to load block cidr=192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:18.925412 containerd[1571]: 2026-04-17 23:58:18.448 [INFO][4197] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:18.925412 containerd[1571]: 2026-04-17 23:58:18.448 [INFO][4197] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.119.64/26 handle="k8s-pod-network.310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70" host="172-232-15-112" Apr 17 23:58:18.925412 containerd[1571]: 2026-04-17 23:58:18.474 [INFO][4197] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70 Apr 17 23:58:18.925412 containerd[1571]: 2026-04-17 23:58:18.531 [INFO][4197] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.119.64/26 handle="k8s-pod-network.310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70" host="172-232-15-112" Apr 17 23:58:18.925412 containerd[1571]: 2026-04-17 23:58:18.556 [INFO][4197] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.119.66/26] block=192.168.119.64/26 handle="k8s-pod-network.310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70" host="172-232-15-112" Apr 17 23:58:18.925412 containerd[1571]: 2026-04-17 23:58:18.556 [INFO][4197] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.119.66/26] handle="k8s-pod-network.310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70" host="172-232-15-112" Apr 17 23:58:18.925412 containerd[1571]: 2026-04-17 23:58:18.556 [INFO][4197] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:58:18.925412 containerd[1571]: 2026-04-17 23:58:18.556 [INFO][4197] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.119.66/26] IPv6=[] ContainerID="310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70" HandleID="k8s-pod-network.310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0" Apr 17 23:58:18.926880 containerd[1571]: 2026-04-17 23:58:18.599 [INFO][4040] cni-plugin/k8s.go 418: Populated endpoint ContainerID="310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70" Namespace="calico-system" Pod="calico-apiserver-6fff8bdfbc-7mc8b" WorkloadEndpoint="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0", GenerateName:"calico-apiserver-6fff8bdfbc-", Namespace:"calico-system", SelfLink:"", UID:"d9a00af0-47b3-4d37-9e23-a56a19b9db0e", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fff8bdfbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"", Pod:"calico-apiserver-6fff8bdfbc-7mc8b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1d27b58af55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:18.926880 containerd[1571]: 2026-04-17 23:58:18.599 [INFO][4040] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.66/32] ContainerID="310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70" Namespace="calico-system" Pod="calico-apiserver-6fff8bdfbc-7mc8b" WorkloadEndpoint="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0" Apr 17 23:58:18.926880 containerd[1571]: 2026-04-17 23:58:18.599 [INFO][4040] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d27b58af55 ContainerID="310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70" Namespace="calico-system" Pod="calico-apiserver-6fff8bdfbc-7mc8b" WorkloadEndpoint="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0" Apr 17 23:58:18.926880 containerd[1571]: 2026-04-17 23:58:18.655 [INFO][4040] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70" Namespace="calico-system" Pod="calico-apiserver-6fff8bdfbc-7mc8b" WorkloadEndpoint="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0" Apr 17 23:58:18.926880 containerd[1571]: 2026-04-17 23:58:18.664 [INFO][4040] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70" Namespace="calico-system" Pod="calico-apiserver-6fff8bdfbc-7mc8b" WorkloadEndpoint="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0", GenerateName:"calico-apiserver-6fff8bdfbc-", Namespace:"calico-system", SelfLink:"", UID:"d9a00af0-47b3-4d37-9e23-a56a19b9db0e", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fff8bdfbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70", Pod:"calico-apiserver-6fff8bdfbc-7mc8b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1d27b58af55", MAC:"5e:90:2b:3a:8b:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:18.926880 containerd[1571]: 2026-04-17 23:58:18.895 [INFO][4040] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70" Namespace="calico-system" Pod="calico-apiserver-6fff8bdfbc-7mc8b" WorkloadEndpoint="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0" Apr 17 23:58:19.013869 systemd-networkd[1236]: calic3ef05c0435: Link UP Apr 17 23:58:19.037832 systemd-networkd[1236]: calic3ef05c0435: Gained carrier Apr 17 23:58:19.084301 containerd[1571]: time="2026-04-17T23:58:19.062691227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:58:19.084301 containerd[1571]: time="2026-04-17T23:58:19.083697113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:58:19.084301 containerd[1571]: time="2026-04-17T23:58:19.083711693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:19.084301 containerd[1571]: time="2026-04-17T23:58:19.083844901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:19.092465 containerd[1571]: time="2026-04-17T23:58:19.090038635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:58:19.092465 containerd[1571]: time="2026-04-17T23:58:19.090103979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:58:19.092465 containerd[1571]: time="2026-04-17T23:58:19.091896995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:19.092465 containerd[1571]: time="2026-04-17T23:58:19.092109027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:19.118234 containerd[1571]: 2026-04-17 23:58:17.625 [ERROR][4123] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:58:19.118234 containerd[1571]: 2026-04-17 23:58:17.863 [INFO][4123] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0 coredns-674b8bbfcf- kube-system 0c48190d-40ea-4a8e-95db-e61cbffe8eda 939 0 2026-04-17 23:57:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-15-112 coredns-674b8bbfcf-88tx2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic3ef05c0435 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54" Namespace="kube-system" Pod="coredns-674b8bbfcf-88tx2" WorkloadEndpoint="172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-" Apr 17 23:58:19.118234 containerd[1571]: 2026-04-17 23:58:17.878 [INFO][4123] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54" Namespace="kube-system" Pod="coredns-674b8bbfcf-88tx2" WorkloadEndpoint="172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0" Apr 17 23:58:19.118234 containerd[1571]: 2026-04-17 23:58:18.345 [INFO][4258] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54" HandleID="k8s-pod-network.1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0" Apr 17 23:58:19.118234 containerd[1571]: 2026-04-17 23:58:18.385 [INFO][4258] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54" HandleID="k8s-pod-network.1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122170), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-15-112", "pod":"coredns-674b8bbfcf-88tx2", "timestamp":"2026-04-17 23:58:18.345984521 +0000 UTC"}, Hostname:"172-232-15-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000285080)} Apr 17 23:58:19.118234 containerd[1571]: 2026-04-17 23:58:18.385 [INFO][4258] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:19.118234 containerd[1571]: 2026-04-17 23:58:18.794 [INFO][4258] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:58:19.118234 containerd[1571]: 2026-04-17 23:58:18.794 [INFO][4258] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-15-112' Apr 17 23:58:19.118234 containerd[1571]: 2026-04-17 23:58:18.831 [INFO][4258] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54" host="172-232-15-112" Apr 17 23:58:19.118234 containerd[1571]: 2026-04-17 23:58:18.846 [INFO][4258] ipam/ipam.go 409: Looking up existing affinities for host host="172-232-15-112" Apr 17 23:58:19.118234 containerd[1571]: 2026-04-17 23:58:18.860 [INFO][4258] ipam/ipam.go 526: Trying affinity for 192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:19.118234 containerd[1571]: 2026-04-17 23:58:18.865 [INFO][4258] ipam/ipam.go 160: Attempting to load block cidr=192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:19.118234 containerd[1571]: 2026-04-17 23:58:18.867 [INFO][4258] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:19.118234 containerd[1571]: 2026-04-17 23:58:18.867 [INFO][4258] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.119.64/26 handle="k8s-pod-network.1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54" host="172-232-15-112" Apr 17 23:58:19.118234 containerd[1571]: 2026-04-17 23:58:18.869 [INFO][4258] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54 Apr 17 23:58:19.118234 containerd[1571]: 2026-04-17 23:58:18.879 [INFO][4258] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.119.64/26 handle="k8s-pod-network.1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54" host="172-232-15-112" Apr 17 23:58:19.118234 containerd[1571]: 2026-04-17 23:58:18.940 [INFO][4258] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.119.69/26] block=192.168.119.64/26 handle="k8s-pod-network.1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54" host="172-232-15-112" Apr 17 23:58:19.118234 containerd[1571]: 2026-04-17 23:58:18.940 [INFO][4258] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.119.69/26] handle="k8s-pod-network.1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54" host="172-232-15-112" Apr 17 23:58:19.118234 containerd[1571]: 2026-04-17 23:58:18.941 [INFO][4258] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
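Names such as `172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0` follow a visible pattern: node name, orchestrator (`k8s`), pod name, and interface joined by single dashes, with dashes inside each component doubled so the separators stay unambiguous. The sketch below reconstructs that formatting as inferred from the names in this log, not copied from Calico's source.

```python
def workload_endpoint_name(node: str, pod: str, iface: str, orchestrator: str = "k8s") -> str:
    """Rebuild the WorkloadEndpoint object-name pattern seen in the log.

    Each component has its '-' characters doubled ('--') so the single '-'
    separators between components remain parseable. Inferred from the logged
    names; field order is node, orchestrator, pod, interface.
    """
    esc = lambda s: s.replace("-", "--")
    return f"{esc(node)}-{orchestrator}-{esc(pod)}-{esc(iface)}"

assert workload_endpoint_name("172-232-15-112", "coredns-674b8bbfcf-88tx2", "eth0") == \
    "172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0"
```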
Apr 17 23:58:19.118234 containerd[1571]: 2026-04-17 23:58:18.941 [INFO][4258] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.119.69/26] IPv6=[] ContainerID="1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54" HandleID="k8s-pod-network.1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0" Apr 17 23:58:19.120374 containerd[1571]: 2026-04-17 23:58:18.971 [INFO][4123] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54" Namespace="kube-system" Pod="coredns-674b8bbfcf-88tx2" WorkloadEndpoint="172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0c48190d-40ea-4a8e-95db-e61cbffe8eda", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"", Pod:"coredns-674b8bbfcf-88tx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3ef05c0435", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:19.120374 containerd[1571]: 2026-04-17 23:58:18.973 [INFO][4123] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.69/32] ContainerID="1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54" Namespace="kube-system" Pod="coredns-674b8bbfcf-88tx2" WorkloadEndpoint="172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0" Apr 17 23:58:19.120374 containerd[1571]: 2026-04-17 23:58:18.973 [INFO][4123] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3ef05c0435 ContainerID="1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54" Namespace="kube-system" Pod="coredns-674b8bbfcf-88tx2" WorkloadEndpoint="172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0" Apr 17 23:58:19.120374 containerd[1571]: 2026-04-17 23:58:19.050 [INFO][4123] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54" Namespace="kube-system" Pod="coredns-674b8bbfcf-88tx2" 
WorkloadEndpoint="172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0" Apr 17 23:58:19.120374 containerd[1571]: 2026-04-17 23:58:19.067 [INFO][4123] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54" Namespace="kube-system" Pod="coredns-674b8bbfcf-88tx2" WorkloadEndpoint="172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0c48190d-40ea-4a8e-95db-e61cbffe8eda", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54", Pod:"coredns-674b8bbfcf-88tx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3ef05c0435", MAC:"de:49:60:c8:5d:28", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:19.120374 containerd[1571]: 2026-04-17 23:58:19.101 [INFO][4123] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54" Namespace="kube-system" Pod="coredns-674b8bbfcf-88tx2" WorkloadEndpoint="172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0" Apr 17 23:58:19.214501 systemd-networkd[1236]: cali7b9e7ab5f2a: Link UP Apr 17 23:58:19.215232 systemd-networkd[1236]: cali7b9e7ab5f2a: Gained carrier Apr 17 23:58:19.271525 containerd[1571]: time="2026-04-17T23:58:19.270580662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:58:19.271955 containerd[1571]: time="2026-04-17T23:58:19.271795323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:58:19.271955 containerd[1571]: time="2026-04-17T23:58:19.271890569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:19.273310 containerd[1571]: time="2026-04-17T23:58:19.273100070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:19.320002 systemd-networkd[1236]: cali02d38c04f0e: Link UP Apr 17 23:58:19.321206 systemd-networkd[1236]: cali02d38c04f0e: Gained carrier Apr 17 23:58:19.329152 kernel: calico-node[4094]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 17 23:58:19.367662 systemd[1]: run-containerd-runc-k8s.io-cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b-runc.x17wZB.mount: Deactivated successfully. Apr 17 23:58:19.413389 containerd[1571]: 2026-04-17 23:58:17.778 [ERROR][4132] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:58:19.413389 containerd[1571]: 2026-04-17 23:58:17.944 [INFO][4132] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0 calico-kube-controllers-5b464cddd7- calico-system 83eecf55-6f3a-4d4f-964b-d260567b14a7 945 0 2026-04-17 23:57:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b464cddd7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-232-15-112 calico-kube-controllers-5b464cddd7-wff8q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7b9e7ab5f2a [] [] }} ContainerID="c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662" Namespace="calico-system" Pod="calico-kube-controllers-5b464cddd7-wff8q" WorkloadEndpoint="172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-" Apr 17 23:58:19.413389 containerd[1571]: 2026-04-17 23:58:17.949 [INFO][4132] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662" Namespace="calico-system" Pod="calico-kube-controllers-5b464cddd7-wff8q" WorkloadEndpoint="172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0" Apr 17 23:58:19.413389 containerd[1571]: 2026-04-17 23:58:18.535 [INFO][4264] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662" HandleID="k8s-pod-network.c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662" Workload="172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0" Apr 17 23:58:19.413389 containerd[1571]: 2026-04-17 23:58:18.593 [INFO][4264] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662" HandleID="k8s-pod-network.c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662" Workload="172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e7bd0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-15-112", "pod":"calico-kube-controllers-5b464cddd7-wff8q", "timestamp":"2026-04-17 23:58:18.535054798 +0000 UTC"}, Hostname:"172-232-15-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00036f080)} Apr 17 23:58:19.413389 containerd[1571]: 2026-04-17 23:58:18.593 [INFO][4264] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:19.413389 containerd[1571]: 2026-04-17 23:58:18.940 [INFO][4264] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:19.413389 containerd[1571]: 2026-04-17 23:58:18.941 [INFO][4264] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-15-112' Apr 17 23:58:19.413389 containerd[1571]: 2026-04-17 23:58:18.949 [INFO][4264] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662" host="172-232-15-112" Apr 17 23:58:19.413389 containerd[1571]: 2026-04-17 23:58:18.958 [INFO][4264] ipam/ipam.go 409: Looking up existing affinities for host host="172-232-15-112" Apr 17 23:58:19.413389 containerd[1571]: 2026-04-17 23:58:18.970 [INFO][4264] ipam/ipam.go 526: Trying affinity for 192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:19.413389 containerd[1571]: 2026-04-17 23:58:18.988 [INFO][4264] ipam/ipam.go 160: Attempting to load block cidr=192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:19.413389 containerd[1571]: 2026-04-17 23:58:19.006 [INFO][4264] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:19.413389 containerd[1571]: 2026-04-17 23:58:19.006 [INFO][4264] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.119.64/26 handle="k8s-pod-network.c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662" host="172-232-15-112" Apr 17 23:58:19.413389 containerd[1571]: 2026-04-17 23:58:19.021 [INFO][4264] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662 Apr 17 23:58:19.413389 containerd[1571]: 2026-04-17 23:58:19.063 [INFO][4264] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.119.64/26 handle="k8s-pod-network.c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662" host="172-232-15-112" Apr 17 23:58:19.413389 containerd[1571]: 2026-04-17 23:58:19.098 [INFO][4264] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.119.70/26] block=192.168.119.64/26 handle="k8s-pod-network.c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662" host="172-232-15-112" Apr 17 23:58:19.413389 containerd[1571]: 2026-04-17 23:58:19.099 [INFO][4264] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.119.70/26] handle="k8s-pod-network.c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662" host="172-232-15-112" Apr 17 23:58:19.413389 containerd[1571]: 2026-04-17 23:58:19.099 [INFO][4264] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
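Note how the concurrent IPAM requests ([4197], [4258], [4264]) each log "About to acquire" almost immediately but "Acquired" only after the previous handle logs "Released": address assignment for simultaneous pod creations on one node is serialized through the host-wide lock. The toy model below mirrors that queueing with an in-process mutex, purely for illustration; Calico's lock is per-host state, not a Python `threading.Lock`.

```python
import threading
import time

ipam_lock = threading.Lock()   # stand-in for the host-wide IPAM lock
allocated: list[str] = []      # stand-in for the block's allocation state

def assign(handle: str, candidate: str) -> None:
    print(f"[{handle}] about to acquire host-wide IPAM lock")
    with ipam_lock:            # requests queue here, as the log timestamps show
        print(f"[{handle}] acquired lock")
        time.sleep(0.1)        # block lookup + claim write, compressed
        allocated.append(candidate)
        print(f"[{handle}] claimed {candidate}, releasing lock")

threads = [
    threading.Thread(target=assign, args=(h, ip))
    for h, ip in [("4197", "192.168.119.66"),
                  ("4258", "192.168.119.69"),
                  ("4264", "192.168.119.70")]
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("assigned:", allocated)
```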
Apr 17 23:58:19.413389 containerd[1571]: 2026-04-17 23:58:19.099 [INFO][4264] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.119.70/26] IPv6=[] ContainerID="c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662" HandleID="k8s-pod-network.c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662" Workload="172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0" Apr 17 23:58:19.418531 containerd[1571]: 2026-04-17 23:58:19.153 [INFO][4132] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662" Namespace="calico-system" Pod="calico-kube-controllers-5b464cddd7-wff8q" WorkloadEndpoint="172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0", GenerateName:"calico-kube-controllers-5b464cddd7-", Namespace:"calico-system", SelfLink:"", UID:"83eecf55-6f3a-4d4f-964b-d260567b14a7", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b464cddd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"", Pod:"calico-kube-controllers-5b464cddd7-wff8q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b9e7ab5f2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:19.418531 containerd[1571]: 2026-04-17 23:58:19.154 [INFO][4132] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.70/32] ContainerID="c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662" Namespace="calico-system" Pod="calico-kube-controllers-5b464cddd7-wff8q" WorkloadEndpoint="172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0" Apr 17 23:58:19.418531 containerd[1571]: 2026-04-17 23:58:19.158 [INFO][4132] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b9e7ab5f2a ContainerID="c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662" Namespace="calico-system" Pod="calico-kube-controllers-5b464cddd7-wff8q" WorkloadEndpoint="172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0" Apr 17 23:58:19.418531 containerd[1571]: 2026-04-17 23:58:19.224 [INFO][4132] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662" Namespace="calico-system" Pod="calico-kube-controllers-5b464cddd7-wff8q" WorkloadEndpoint="172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0" Apr 17 23:58:19.418531 containerd[1571]: 2026-04-17 
23:58:19.226 [INFO][4132] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662" Namespace="calico-system" Pod="calico-kube-controllers-5b464cddd7-wff8q" WorkloadEndpoint="172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0", GenerateName:"calico-kube-controllers-5b464cddd7-", Namespace:"calico-system", SelfLink:"", UID:"83eecf55-6f3a-4d4f-964b-d260567b14a7", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b464cddd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662", Pod:"calico-kube-controllers-5b464cddd7-wff8q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b9e7ab5f2a", MAC:"c2:4a:d5:c2:30:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:19.418531 containerd[1571]: 2026-04-17 23:58:19.293 [INFO][4132] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662" Namespace="calico-system" Pod="calico-kube-controllers-5b464cddd7-wff8q" WorkloadEndpoint="172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0" Apr 17 23:58:19.531558 containerd[1571]: 2026-04-17 23:58:17.903 [ERROR][4124] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:58:19.531558 containerd[1571]: 2026-04-17 23:58:18.012 [INFO][4124] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--15--112-k8s-csi--node--driver--xt426-eth0 csi-node-driver- calico-system 8b300517-4fdc-4aae-b868-e2f538976f49 940 0 2026-04-17 23:57:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-232-15-112 csi-node-driver-xt426 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali02d38c04f0e [] [] }} ContainerID="e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6" Namespace="calico-system" Pod="csi-node-driver-xt426" 
WorkloadEndpoint="172--232--15--112-k8s-csi--node--driver--xt426-" Apr 17 23:58:19.531558 containerd[1571]: 2026-04-17 23:58:18.012 [INFO][4124] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6" Namespace="calico-system" Pod="csi-node-driver-xt426" WorkloadEndpoint="172--232--15--112-k8s-csi--node--driver--xt426-eth0" Apr 17 23:58:19.531558 containerd[1571]: 2026-04-17 23:58:18.588 [INFO][4275] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6" HandleID="k8s-pod-network.e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6" Workload="172--232--15--112-k8s-csi--node--driver--xt426-eth0" Apr 17 23:58:19.531558 containerd[1571]: 2026-04-17 23:58:18.652 [INFO][4275] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6" HandleID="k8s-pod-network.e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6" Workload="172--232--15--112-k8s-csi--node--driver--xt426-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003716e0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-15-112", "pod":"csi-node-driver-xt426", "timestamp":"2026-04-17 23:58:18.588860751 +0000 UTC"}, Hostname:"172-232-15-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005aa840)} Apr 17 23:58:19.531558 containerd[1571]: 2026-04-17 23:58:18.652 [INFO][4275] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:19.531558 containerd[1571]: 2026-04-17 23:58:19.112 [INFO][4275] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:58:19.531558 containerd[1571]: 2026-04-17 23:58:19.112 [INFO][4275] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-15-112' Apr 17 23:58:19.531558 containerd[1571]: 2026-04-17 23:58:19.148 [INFO][4275] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6" host="172-232-15-112" Apr 17 23:58:19.531558 containerd[1571]: 2026-04-17 23:58:19.220 [INFO][4275] ipam/ipam.go 409: Looking up existing affinities for host host="172-232-15-112" Apr 17 23:58:19.531558 containerd[1571]: 2026-04-17 23:58:19.233 [INFO][4275] ipam/ipam.go 526: Trying affinity for 192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:19.531558 containerd[1571]: 2026-04-17 23:58:19.237 [INFO][4275] ipam/ipam.go 160: Attempting to load block cidr=192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:19.531558 containerd[1571]: 2026-04-17 23:58:19.241 [INFO][4275] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:19.531558 containerd[1571]: 2026-04-17 23:58:19.242 [INFO][4275] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.119.64/26 handle="k8s-pod-network.e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6" host="172-232-15-112" Apr 17 23:58:19.531558 containerd[1571]: 2026-04-17 23:58:19.245 [INFO][4275] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6 Apr 17 23:58:19.531558 containerd[1571]: 2026-04-17 23:58:19.256 [INFO][4275] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.119.64/26 handle="k8s-pod-network.e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6" host="172-232-15-112" Apr 17 23:58:19.531558 containerd[1571]: 2026-04-17 23:58:19.279 [INFO][4275] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.119.71/26] block=192.168.119.64/26 handle="k8s-pod-network.e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6" host="172-232-15-112" Apr 17 23:58:19.531558 containerd[1571]: 2026-04-17 23:58:19.279 [INFO][4275] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.119.71/26] handle="k8s-pod-network.e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6" host="172-232-15-112" Apr 17 23:58:19.531558 containerd[1571]: 2026-04-17 23:58:19.280 [INFO][4275] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
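Each Calico CNI line that containerd relays has the same shape: a timestamp, a bracketed level and request number, a source file and line, then the message. A small header parser can make these interleaved flows easier to follow when post-processing the journal; `req` below is this sketch's own label for the bracketed number, not official Calico terminology.

```python
import re

# Header of each Calico CNI log line as relayed by containerd:
#   <timestamp> [<LEVEL>][<number>] <file> <line>: <rest of message>
CALICO_LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\s+"
    r"\[(?P<level>[A-Z]+)\]\[(?P<req>\d+)\]\s+"
    r"(?P<file>\S+)\s+(?P<line>\d+):\s+(?P<msg>.*)$"
)

sample = ("2026-04-17 23:58:19.280 [INFO][4275] ipam/ipam_plugin.go 325: "
          "Calico CNI IPAM assigned addresses IPv4=[192.168.119.71/26] IPv6=[]")

m = CALICO_LINE.match(sample)
if m:
    print(m.group("ts"), m.group("level"), m.group("req"),
          m.group("file"), m.group("line"))
    print(m.group("msg"))
```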
Apr 17 23:58:19.531558 containerd[1571]: 2026-04-17 23:58:19.280 [INFO][4275] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.119.71/26] IPv6=[] ContainerID="e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6" HandleID="k8s-pod-network.e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6" Workload="172--232--15--112-k8s-csi--node--driver--xt426-eth0" Apr 17 23:58:19.532580 containerd[1571]: 2026-04-17 23:58:19.293 [INFO][4124] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6" Namespace="calico-system" Pod="csi-node-driver-xt426" WorkloadEndpoint="172--232--15--112-k8s-csi--node--driver--xt426-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-csi--node--driver--xt426-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8b300517-4fdc-4aae-b868-e2f538976f49", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"", Pod:"csi-node-driver-xt426", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.119.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali02d38c04f0e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:19.532580 containerd[1571]: 2026-04-17 23:58:19.293 [INFO][4124] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.71/32] ContainerID="e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6" Namespace="calico-system" Pod="csi-node-driver-xt426" WorkloadEndpoint="172--232--15--112-k8s-csi--node--driver--xt426-eth0" Apr 17 23:58:19.532580 containerd[1571]: 2026-04-17 23:58:19.293 [INFO][4124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02d38c04f0e ContainerID="e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6" Namespace="calico-system" Pod="csi-node-driver-xt426" WorkloadEndpoint="172--232--15--112-k8s-csi--node--driver--xt426-eth0" Apr 17 23:58:19.532580 containerd[1571]: 2026-04-17 23:58:19.345 [INFO][4124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6" Namespace="calico-system" Pod="csi-node-driver-xt426" WorkloadEndpoint="172--232--15--112-k8s-csi--node--driver--xt426-eth0" Apr 17 23:58:19.532580 containerd[1571]: 2026-04-17 23:58:19.392 [INFO][4124] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6" Namespace="calico-system" 
Pod="csi-node-driver-xt426" WorkloadEndpoint="172--232--15--112-k8s-csi--node--driver--xt426-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-csi--node--driver--xt426-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8b300517-4fdc-4aae-b868-e2f538976f49", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6", Pod:"csi-node-driver-xt426", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.119.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali02d38c04f0e", MAC:"06:d5:70:2e:b4:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:19.532580 containerd[1571]: 2026-04-17 23:58:19.484 [INFO][4124] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6" Namespace="calico-system" Pod="csi-node-driver-xt426" WorkloadEndpoint="172--232--15--112-k8s-csi--node--driver--xt426-eth0" Apr 17 23:58:19.558577 kubelet[2741]: I0417 23:58:19.558142 2741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c8d0225-44a8-413f-8e5f-1d6fb1b28b65" path="/var/lib/kubelet/pods/3c8d0225-44a8-413f-8e5f-1d6fb1b28b65/volumes" Apr 17 23:58:19.578233 systemd-networkd[1236]: cali15822b9f7da: Link UP Apr 17 23:58:19.593292 systemd-networkd[1236]: cali15822b9f7da: Gained carrier Apr 17 23:58:19.739029 containerd[1571]: time="2026-04-17T23:58:19.736416153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:58:19.739029 containerd[1571]: time="2026-04-17T23:58:19.736553442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:58:19.739029 containerd[1571]: time="2026-04-17T23:58:19.736569843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:19.739029 containerd[1571]: time="2026-04-17T23:58:19.736699510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:19.792795 containerd[1571]: time="2026-04-17T23:58:19.790677404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fff8bdfbc-7mc8b,Uid:d9a00af0-47b3-4d37-9e23-a56a19b9db0e,Namespace:calico-system,Attempt:1,} returns sandbox id \"310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70\"" Apr 17 23:58:19.805352 systemd-journald[1166]: Under memory pressure, flushing caches. Apr 17 23:58:19.797248 systemd-resolved[1476]: Under memory pressure, flushing caches. Apr 17 23:58:19.797353 systemd-resolved[1476]: Flushed all caches. Apr 17 23:58:19.819037 containerd[1571]: time="2026-04-17T23:58:19.818319340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:58:19.852975 containerd[1571]: time="2026-04-17T23:58:19.852936355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-l2kf2,Uid:e8c5738b-2e74-4bd4-9c11-4f37db2195d6,Namespace:calico-system,Attempt:1,} returns sandbox id \"abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04\"" Apr 17 23:58:19.944565 containerd[1571]: 2026-04-17 23:58:18.602 [ERROR][4281] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:58:19.944565 containerd[1571]: 2026-04-17 23:58:18.648 [INFO][4281] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--15--112-k8s-whisker--59bdfdf7b8--ksvgv-eth0 whisker-59bdfdf7b8- calico-system 001bd93c-0762-4ea4-a9b6-cb368ce16ca6 963 0 2026-04-17 23:58:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:59bdfdf7b8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-232-15-112 whisker-59bdfdf7b8-ksvgv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali15822b9f7da [] [] }} ContainerID="f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88" Namespace="calico-system" Pod="whisker-59bdfdf7b8-ksvgv" WorkloadEndpoint="172--232--15--112-k8s-whisker--59bdfdf7b8--ksvgv-" Apr 17 23:58:19.944565 containerd[1571]: 2026-04-17 23:58:18.648 [INFO][4281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88" Namespace="calico-system" Pod="whisker-59bdfdf7b8-ksvgv" WorkloadEndpoint="172--232--15--112-k8s-whisker--59bdfdf7b8--ksvgv-eth0" Apr 17 23:58:19.944565 containerd[1571]: 2026-04-17 23:58:19.079 [INFO][4319] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88" HandleID="k8s-pod-network.f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88" Workload="172--232--15--112-k8s-whisker--59bdfdf7b8--ksvgv-eth0" Apr 17 23:58:19.944565 containerd[1571]: 2026-04-17 23:58:19.219 [INFO][4319] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88" HandleID="k8s-pod-network.f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88" Workload="172--232--15--112-k8s-whisker--59bdfdf7b8--ksvgv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d7380), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-15-112", 
"pod":"whisker-59bdfdf7b8-ksvgv", "timestamp":"2026-04-17 23:58:19.079071531 +0000 UTC"}, Hostname:"172-232-15-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000113340)} Apr 17 23:58:19.944565 containerd[1571]: 2026-04-17 23:58:19.220 [INFO][4319] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:19.944565 containerd[1571]: 2026-04-17 23:58:19.281 [INFO][4319] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:19.944565 containerd[1571]: 2026-04-17 23:58:19.281 [INFO][4319] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-15-112' Apr 17 23:58:19.944565 containerd[1571]: 2026-04-17 23:58:19.287 [INFO][4319] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88" host="172-232-15-112" Apr 17 23:58:19.944565 containerd[1571]: 2026-04-17 23:58:19.314 [INFO][4319] ipam/ipam.go 409: Looking up existing affinities for host host="172-232-15-112" Apr 17 23:58:19.944565 containerd[1571]: 2026-04-17 23:58:19.429 [INFO][4319] ipam/ipam.go 526: Trying affinity for 192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:19.944565 containerd[1571]: 2026-04-17 23:58:19.446 [INFO][4319] ipam/ipam.go 160: Attempting to load block cidr=192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:19.944565 containerd[1571]: 2026-04-17 23:58:19.461 [INFO][4319] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.119.64/26 host="172-232-15-112" Apr 17 23:58:19.944565 containerd[1571]: 2026-04-17 23:58:19.461 [INFO][4319] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.119.64/26 handle="k8s-pod-network.f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88" host="172-232-15-112" Apr 17 23:58:19.944565 containerd[1571]: 2026-04-17 23:58:19.483 [INFO][4319] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88 Apr 17 23:58:19.944565 containerd[1571]: 2026-04-17 23:58:19.491 [INFO][4319] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.119.64/26 handle="k8s-pod-network.f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88" host="172-232-15-112" Apr 17 23:58:19.944565 containerd[1571]: 2026-04-17 23:58:19.504 [INFO][4319] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.119.72/26] block=192.168.119.64/26 handle="k8s-pod-network.f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88" host="172-232-15-112" Apr 17 23:58:19.944565 containerd[1571]: 2026-04-17 23:58:19.504 [INFO][4319] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.119.72/26] handle="k8s-pod-network.f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88" host="172-232-15-112" Apr 17 23:58:19.944565 containerd[1571]: 2026-04-17 23:58:19.505 [INFO][4319] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:58:19.944565 containerd[1571]: 2026-04-17 23:58:19.505 [INFO][4319] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.119.72/26] IPv6=[] ContainerID="f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88" HandleID="k8s-pod-network.f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88" Workload="172--232--15--112-k8s-whisker--59bdfdf7b8--ksvgv-eth0" Apr 17 23:58:19.945405 containerd[1571]: 2026-04-17 23:58:19.562 [INFO][4281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88" Namespace="calico-system" Pod="whisker-59bdfdf7b8-ksvgv" WorkloadEndpoint="172--232--15--112-k8s-whisker--59bdfdf7b8--ksvgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-whisker--59bdfdf7b8--ksvgv-eth0", GenerateName:"whisker-59bdfdf7b8-", Namespace:"calico-system", SelfLink:"", UID:"001bd93c-0762-4ea4-a9b6-cb368ce16ca6", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 58, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59bdfdf7b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"", Pod:"whisker-59bdfdf7b8-ksvgv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.119.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali15822b9f7da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:19.945405 containerd[1571]: 2026-04-17 23:58:19.563 [INFO][4281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.72/32] ContainerID="f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88" Namespace="calico-system" Pod="whisker-59bdfdf7b8-ksvgv" WorkloadEndpoint="172--232--15--112-k8s-whisker--59bdfdf7b8--ksvgv-eth0" Apr 17 23:58:19.945405 containerd[1571]: 2026-04-17 23:58:19.563 [INFO][4281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15822b9f7da ContainerID="f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88" Namespace="calico-system" Pod="whisker-59bdfdf7b8-ksvgv" WorkloadEndpoint="172--232--15--112-k8s-whisker--59bdfdf7b8--ksvgv-eth0" Apr 17 23:58:19.945405 containerd[1571]: 2026-04-17 23:58:19.613 [INFO][4281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88" Namespace="calico-system" Pod="whisker-59bdfdf7b8-ksvgv" WorkloadEndpoint="172--232--15--112-k8s-whisker--59bdfdf7b8--ksvgv-eth0" Apr 17 23:58:19.945405 containerd[1571]: 2026-04-17 23:58:19.715 [INFO][4281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88" Namespace="calico-system" Pod="whisker-59bdfdf7b8-ksvgv" 
WorkloadEndpoint="172--232--15--112-k8s-whisker--59bdfdf7b8--ksvgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-whisker--59bdfdf7b8--ksvgv-eth0", GenerateName:"whisker-59bdfdf7b8-", Namespace:"calico-system", SelfLink:"", UID:"001bd93c-0762-4ea4-a9b6-cb368ce16ca6", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 58, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59bdfdf7b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88", Pod:"whisker-59bdfdf7b8-ksvgv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.119.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali15822b9f7da", MAC:"ce:a9:19:49:ad:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:19.945405 containerd[1571]: 2026-04-17 23:58:19.831 [INFO][4281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88" Namespace="calico-system" Pod="whisker-59bdfdf7b8-ksvgv" WorkloadEndpoint="172--232--15--112-k8s-whisker--59bdfdf7b8--ksvgv-eth0" Apr 17 23:58:19.959187 containerd[1571]: time="2026-04-17T23:58:19.958013154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rqhzd,Uid:4eb46d8d-9eca-422e-ba74-930bbc0b7688,Namespace:kube-system,Attempt:1,} returns sandbox id \"e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad\"" Apr 17 23:58:19.960587 kubelet[2741]: E0417 23:58:19.960560 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:58:19.974923 containerd[1571]: time="2026-04-17T23:58:19.974309262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:58:19.974923 containerd[1571]: time="2026-04-17T23:58:19.974412588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:58:19.974923 containerd[1571]: time="2026-04-17T23:58:19.974430469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:19.982154 containerd[1571]: time="2026-04-17T23:58:19.979438434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:19.988671 containerd[1571]: time="2026-04-17T23:58:19.988629824Z" level=info msg="CreateContainer within sandbox \"e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:58:19.996934 containerd[1571]: time="2026-04-17T23:58:19.996910891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fff8bdfbc-mvj4v,Uid:47346385-5f17-4c08-b4a3-a3958d4f414b,Namespace:calico-system,Attempt:1,} returns sandbox id \"cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b\"" Apr 17 23:58:20.006261 containerd[1571]: time="2026-04-17T23:58:20.004188912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:58:20.006261 containerd[1571]: time="2026-04-17T23:58:20.005169199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:58:20.006261 containerd[1571]: time="2026-04-17T23:58:20.005193500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:20.006261 containerd[1571]: time="2026-04-17T23:58:20.006098932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:20.035387 containerd[1571]: time="2026-04-17T23:58:20.035338315Z" level=info msg="CreateContainer within sandbox \"e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f017e7b341efcb6b77084a49ac843a8e09e3561592c5ab3845736dd5a1d3d80e\"" Apr 17 23:58:20.040161 containerd[1571]: time="2026-04-17T23:58:20.039888135Z" level=info msg="StartContainer for \"f017e7b341efcb6b77084a49ac843a8e09e3561592c5ab3845736dd5a1d3d80e\"" Apr 17 23:58:20.053392 systemd-networkd[1236]: cali1d27b58af55: Gained IPv6LL Apr 17 23:58:20.132401 containerd[1571]: time="2026-04-17T23:58:20.130728132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:58:20.132401 containerd[1571]: time="2026-04-17T23:58:20.130780695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:58:20.132401 containerd[1571]: time="2026-04-17T23:58:20.130797376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:20.132401 containerd[1571]: time="2026-04-17T23:58:20.130899322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:20.183294 systemd-networkd[1236]: calide41ec509a9: Gained IPv6LL Apr 17 23:58:20.227245 containerd[1571]: time="2026-04-17T23:58:20.226998020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-88tx2,Uid:0c48190d-40ea-4a8e-95db-e61cbffe8eda,Namespace:kube-system,Attempt:1,} returns sandbox id \"1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54\"" Apr 17 23:58:20.231574 kubelet[2741]: E0417 23:58:20.230017 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:58:20.249524 containerd[1571]: time="2026-04-17T23:58:20.249490247Z" level=info msg="CreateContainer within sandbox \"1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:58:20.293806 containerd[1571]: time="2026-04-17T23:58:20.293770090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xt426,Uid:8b300517-4fdc-4aae-b868-e2f538976f49,Namespace:calico-system,Attempt:1,} returns sandbox id \"e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6\"" Apr 17 23:58:20.314724 containerd[1571]: time="2026-04-17T23:58:20.314151126Z" level=info msg="CreateContainer within sandbox \"1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6aa485acc730f8baba8558c9ee62016ffc6eed86be678e6e0405f5ad3a21caf0\"" Apr 17 23:58:20.317578 containerd[1571]: time="2026-04-17T23:58:20.316378664Z" level=info msg="StartContainer for \"6aa485acc730f8baba8558c9ee62016ffc6eed86be678e6e0405f5ad3a21caf0\"" Apr 17 23:58:20.400629 containerd[1571]: time="2026-04-17T23:58:20.398949978Z" level=info msg="StartContainer for \"f017e7b341efcb6b77084a49ac843a8e09e3561592c5ab3845736dd5a1d3d80e\" returns successfully" Apr 17 23:58:20.437412 systemd-networkd[1236]: cali119c37c994f: Gained IPv6LL Apr 17 23:58:20.501611 systemd-networkd[1236]: calic3ef05c0435: Gained IPv6LL Apr 17 23:58:20.526079 containerd[1571]: time="2026-04-17T23:58:20.525849538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59bdfdf7b8-ksvgv,Uid:001bd93c-0762-4ea4-a9b6-cb368ce16ca6,Namespace:calico-system,Attempt:0,} returns sandbox id \"f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88\"" Apr 17 23:58:20.571306 systemd-networkd[1236]: cali825d1771b61: Gained IPv6LL Apr 17 23:58:20.650251 containerd[1571]: time="2026-04-17T23:58:20.649459350Z" level=info msg="StartContainer for \"6aa485acc730f8baba8558c9ee62016ffc6eed86be678e6e0405f5ad3a21caf0\" returns successfully" Apr 17 23:58:20.654449 containerd[1571]: time="2026-04-17T23:58:20.654384142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b464cddd7-wff8q,Uid:83eecf55-6f3a-4d4f-964b-d260567b14a7,Namespace:calico-system,Attempt:1,} returns sandbox id \"c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662\"" Apr 17 23:58:20.729109 kubelet[2741]: E0417 23:58:20.728874 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:58:20.753576 kubelet[2741]: E0417 23:58:20.752909 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:58:20.900349 kubelet[2741]: I0417 23:58:20.899902 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rqhzd" podStartSLOduration=51.899351957 podStartE2EDuration="51.899351957s" podCreationTimestamp="2026-04-17 23:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:58:20.89346858 +0000 UTC m=+57.539717153" watchObservedRunningTime="2026-04-17 23:58:20.899351957 +0000 UTC m=+57.545600500" Apr 17 23:58:20.902593 kubelet[2741]: I0417 23:58:20.901673 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-88tx2" podStartSLOduration=51.901660949000004 podStartE2EDuration="51.901660949s" podCreationTimestamp="2026-04-17 23:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:58:20.813357867 +0000 UTC m=+57.459606410" watchObservedRunningTime="2026-04-17 23:58:20.901660949 +0000 UTC m=+57.547909492" Apr 17 23:58:21.031036 systemd-networkd[1236]: cali7b9e7ab5f2a: Gained IPv6LL Apr 17 23:58:21.039705 systemd-networkd[1236]: cali15822b9f7da: Gained IPv6LL Apr 17 23:58:21.300065 systemd-networkd[1236]: vxlan.calico: Link UP Apr 17 23:58:21.300079 systemd-networkd[1236]: vxlan.calico: Gained carrier Apr 17 23:58:21.341895 systemd-networkd[1236]: cali02d38c04f0e: Gained IPv6LL Apr 17 23:58:21.804153 kubelet[2741]: E0417 23:58:21.804094 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:58:21.819591 kubelet[2741]: E0417 23:58:21.805827 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:58:21.857311 systemd-journald[1166]: Under memory pressure, flushing caches. Apr 17 23:58:21.846205 systemd-resolved[1476]: Under memory pressure, flushing caches. Apr 17 23:58:21.846253 systemd-resolved[1476]: Flushed all caches. 
Apr 17 23:58:22.379042 containerd[1571]: time="2026-04-17T23:58:22.378971505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:22.380440 containerd[1571]: time="2026-04-17T23:58:22.380389182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 17 23:58:22.380740 containerd[1571]: time="2026-04-17T23:58:22.380707680Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:22.385294 containerd[1571]: time="2026-04-17T23:58:22.385229115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:22.386016 containerd[1571]: time="2026-04-17T23:58:22.385975835Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.56756184s" Apr 17 23:58:22.386079 containerd[1571]: time="2026-04-17T23:58:22.386020427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:58:22.392718 containerd[1571]: time="2026-04-17T23:58:22.392668798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 23:58:22.411910 containerd[1571]: time="2026-04-17T23:58:22.411872068Z" level=info msg="CreateContainer within sandbox \"310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:58:22.422641 containerd[1571]: time="2026-04-17T23:58:22.422612110Z" level=info msg="CreateContainer within sandbox \"310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"463304024e01758b580016758617f1e8b5663eddc41e65ce36cb4f8efa952a1c\"" Apr 17 23:58:22.426880 containerd[1571]: time="2026-04-17T23:58:22.426854160Z" level=info msg="StartContainer for \"463304024e01758b580016758617f1e8b5663eddc41e65ce36cb4f8efa952a1c\"" Apr 17 23:58:22.471305 systemd[1]: run-containerd-runc-k8s.io-463304024e01758b580016758617f1e8b5663eddc41e65ce36cb4f8efa952a1c-runc.aB5m37.mount: Deactivated successfully. 
Apr 17 23:58:22.523871 containerd[1571]: time="2026-04-17T23:58:22.523807743Z" level=info msg="StartContainer for \"463304024e01758b580016758617f1e8b5663eddc41e65ce36cb4f8efa952a1c\" returns successfully" Apr 17 23:58:22.618110 systemd-networkd[1236]: vxlan.calico: Gained IPv6LL Apr 17 23:58:22.816691 kubelet[2741]: E0417 23:58:22.815944 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:58:22.817662 kubelet[2741]: E0417 23:58:22.816945 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:58:23.504742 containerd[1571]: time="2026-04-17T23:58:23.504706611Z" level=info msg="StopPodSandbox for \"b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90\"" Apr 17 23:58:23.693471 containerd[1571]: 2026-04-17 23:58:23.631 [WARNING][4993] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"e8c5738b-2e74-4bd4-9c11-4f37db2195d6", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04", Pod:"goldmane-5b85766d88-l2kf2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.119.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali119c37c994f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:23.693471 containerd[1571]: 2026-04-17 23:58:23.631 [INFO][4993] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Apr 17 23:58:23.693471 containerd[1571]: 2026-04-17 23:58:23.631 [INFO][4993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" iface="eth0" netns="" Apr 17 23:58:23.693471 containerd[1571]: 2026-04-17 23:58:23.631 [INFO][4993] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Apr 17 23:58:23.693471 containerd[1571]: 2026-04-17 23:58:23.632 [INFO][4993] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Apr 17 23:58:23.693471 containerd[1571]: 2026-04-17 23:58:23.670 [INFO][5003] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" HandleID="k8s-pod-network.b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Workload="172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0" Apr 17 23:58:23.693471 containerd[1571]: 2026-04-17 23:58:23.670 [INFO][5003] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:23.693471 containerd[1571]: 2026-04-17 23:58:23.670 [INFO][5003] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:23.693471 containerd[1571]: 2026-04-17 23:58:23.679 [WARNING][5003] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" HandleID="k8s-pod-network.b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Workload="172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0" Apr 17 23:58:23.693471 containerd[1571]: 2026-04-17 23:58:23.679 [INFO][5003] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" HandleID="k8s-pod-network.b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Workload="172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0" Apr 17 23:58:23.693471 containerd[1571]: 2026-04-17 23:58:23.682 [INFO][5003] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:23.693471 containerd[1571]: 2026-04-17 23:58:23.685 [INFO][4993] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Apr 17 23:58:23.693471 containerd[1571]: time="2026-04-17T23:58:23.689077254Z" level=info msg="TearDown network for sandbox \"b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90\" successfully" Apr 17 23:58:23.693471 containerd[1571]: time="2026-04-17T23:58:23.689106396Z" level=info msg="StopPodSandbox for \"b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90\" returns successfully" Apr 17 23:58:23.694072 containerd[1571]: time="2026-04-17T23:58:23.693777252Z" level=info msg="RemovePodSandbox for \"b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90\"" Apr 17 23:58:23.694072 containerd[1571]: time="2026-04-17T23:58:23.693812174Z" level=info msg="Forcibly stopping sandbox \"b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90\"" Apr 17 23:58:23.821900 kubelet[2741]: I0417 23:58:23.821532 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:58:23.824780 kubelet[2741]: E0417 23:58:23.823276 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:58:23.824780 kubelet[2741]: E0417 23:58:23.824731 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:58:23.833588 containerd[1571]: 2026-04-17 23:58:23.770 [WARNING][5018] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"e8c5738b-2e74-4bd4-9c11-4f37db2195d6", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04", Pod:"goldmane-5b85766d88-l2kf2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.119.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali119c37c994f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:23.833588 containerd[1571]: 2026-04-17 23:58:23.771 [INFO][5018] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Apr 17 23:58:23.833588 containerd[1571]: 2026-04-17 23:58:23.771 [INFO][5018] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" iface="eth0" netns="" Apr 17 23:58:23.833588 containerd[1571]: 2026-04-17 23:58:23.771 [INFO][5018] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Apr 17 23:58:23.833588 containerd[1571]: 2026-04-17 23:58:23.771 [INFO][5018] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Apr 17 23:58:23.833588 containerd[1571]: 2026-04-17 23:58:23.804 [INFO][5026] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" HandleID="k8s-pod-network.b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Workload="172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0" Apr 17 23:58:23.833588 containerd[1571]: 2026-04-17 23:58:23.804 [INFO][5026] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:23.833588 containerd[1571]: 2026-04-17 23:58:23.805 [INFO][5026] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:23.833588 containerd[1571]: 2026-04-17 23:58:23.812 [WARNING][5026] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" HandleID="k8s-pod-network.b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Workload="172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0" Apr 17 23:58:23.833588 containerd[1571]: 2026-04-17 23:58:23.812 [INFO][5026] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" HandleID="k8s-pod-network.b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Workload="172--232--15--112-k8s-goldmane--5b85766d88--l2kf2-eth0" Apr 17 23:58:23.833588 containerd[1571]: 2026-04-17 23:58:23.815 [INFO][5026] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:23.833588 containerd[1571]: 2026-04-17 23:58:23.825 [INFO][5018] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90" Apr 17 23:58:23.834068 containerd[1571]: time="2026-04-17T23:58:23.833638148Z" level=info msg="TearDown network for sandbox \"b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90\" successfully" Apr 17 23:58:23.847243 containerd[1571]: time="2026-04-17T23:58:23.846230412Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:58:23.847243 containerd[1571]: time="2026-04-17T23:58:23.846564190Z" level=info msg="RemovePodSandbox \"b00ca37985ead571ef0e145590bc24c01cd6057d4212b4854962321b0f8b0a90\" returns successfully" Apr 17 23:58:23.848835 containerd[1571]: time="2026-04-17T23:58:23.848793107Z" level=info msg="StopPodSandbox for \"6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79\"" Apr 17 23:58:24.006736 containerd[1571]: 2026-04-17 23:58:23.921 [WARNING][5041] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0", GenerateName:"calico-apiserver-6fff8bdfbc-", Namespace:"calico-system", SelfLink:"", UID:"d9a00af0-47b3-4d37-9e23-a56a19b9db0e", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fff8bdfbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70", Pod:"calico-apiserver-6fff8bdfbc-7mc8b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1d27b58af55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:24.006736 containerd[1571]: 2026-04-17 23:58:23.922 [INFO][5041] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Apr 17 23:58:24.006736 containerd[1571]: 2026-04-17 23:58:23.922 [INFO][5041] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" iface="eth0" netns="" Apr 17 23:58:24.006736 containerd[1571]: 2026-04-17 23:58:23.922 [INFO][5041] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Apr 17 23:58:24.006736 containerd[1571]: 2026-04-17 23:58:23.922 [INFO][5041] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Apr 17 23:58:24.006736 containerd[1571]: 2026-04-17 23:58:23.980 [INFO][5048] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" HandleID="k8s-pod-network.6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0" Apr 17 23:58:24.006736 containerd[1571]: 2026-04-17 23:58:23.980 [INFO][5048] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:24.006736 containerd[1571]: 2026-04-17 23:58:23.980 [INFO][5048] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:24.006736 containerd[1571]: 2026-04-17 23:58:23.992 [WARNING][5048] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" HandleID="k8s-pod-network.6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0" Apr 17 23:58:24.006736 containerd[1571]: 2026-04-17 23:58:23.992 [INFO][5048] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" HandleID="k8s-pod-network.6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0" Apr 17 23:58:24.006736 containerd[1571]: 2026-04-17 23:58:23.994 [INFO][5048] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:24.006736 containerd[1571]: 2026-04-17 23:58:23.999 [INFO][5041] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Apr 17 23:58:24.008828 containerd[1571]: time="2026-04-17T23:58:24.008538901Z" level=info msg="TearDown network for sandbox \"6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79\" successfully" Apr 17 23:58:24.008828 containerd[1571]: time="2026-04-17T23:58:24.008575623Z" level=info msg="StopPodSandbox for \"6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79\" returns successfully" Apr 17 23:58:24.009899 containerd[1571]: time="2026-04-17T23:58:24.009631097Z" level=info msg="RemovePodSandbox for \"6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79\"" Apr 17 23:58:24.009899 containerd[1571]: time="2026-04-17T23:58:24.009673129Z" level=info msg="Forcibly stopping sandbox \"6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79\"" Apr 17 23:58:24.122762 containerd[1571]: 2026-04-17 23:58:24.067 [WARNING][5062] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0", GenerateName:"calico-apiserver-6fff8bdfbc-", Namespace:"calico-system", SelfLink:"", UID:"d9a00af0-47b3-4d37-9e23-a56a19b9db0e", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fff8bdfbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"310a3375670da0ced3c2181a4be7e742c1a3379530ac223ab665a7283f99ae70", Pod:"calico-apiserver-6fff8bdfbc-7mc8b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1d27b58af55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:24.122762 containerd[1571]: 2026-04-17 23:58:24.067 [INFO][5062] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Apr 17 23:58:24.122762 containerd[1571]: 2026-04-17 23:58:24.067 [INFO][5062] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" iface="eth0" netns="" Apr 17 23:58:24.122762 containerd[1571]: 2026-04-17 23:58:24.067 [INFO][5062] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Apr 17 23:58:24.122762 containerd[1571]: 2026-04-17 23:58:24.067 [INFO][5062] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Apr 17 23:58:24.122762 containerd[1571]: 2026-04-17 23:58:24.106 [INFO][5069] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" HandleID="k8s-pod-network.6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0" Apr 17 23:58:24.122762 containerd[1571]: 2026-04-17 23:58:24.106 [INFO][5069] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:24.122762 containerd[1571]: 2026-04-17 23:58:24.106 [INFO][5069] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:24.122762 containerd[1571]: 2026-04-17 23:58:24.114 [WARNING][5069] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" HandleID="k8s-pod-network.6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0" Apr 17 23:58:24.122762 containerd[1571]: 2026-04-17 23:58:24.114 [INFO][5069] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" HandleID="k8s-pod-network.6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--7mc8b-eth0" Apr 17 23:58:24.122762 containerd[1571]: 2026-04-17 23:58:24.116 [INFO][5069] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:24.122762 containerd[1571]: 2026-04-17 23:58:24.118 [INFO][5062] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79" Apr 17 23:58:24.124391 containerd[1571]: time="2026-04-17T23:58:24.123276561Z" level=info msg="TearDown network for sandbox \"6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79\" successfully" Apr 17 23:58:24.130175 containerd[1571]: time="2026-04-17T23:58:24.130139003Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:58:24.130263 containerd[1571]: time="2026-04-17T23:58:24.130201027Z" level=info msg="RemovePodSandbox \"6cc9634b2745563ff5bf372911f316571c5e6d3c0df27e2ca5cb63d3a7958a79\" returns successfully" Apr 17 23:58:24.131078 containerd[1571]: time="2026-04-17T23:58:24.130769856Z" level=info msg="StopPodSandbox for \"71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198\"" Apr 17 23:58:24.205070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2517086227.mount: Deactivated successfully. Apr 17 23:58:24.274838 containerd[1571]: 2026-04-17 23:58:24.197 [WARNING][5084] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" WorkloadEndpoint="172--232--15--112-k8s-whisker--6db78f5767--pb686-eth0" Apr 17 23:58:24.274838 containerd[1571]: 2026-04-17 23:58:24.197 [INFO][5084] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Apr 17 23:58:24.274838 containerd[1571]: 2026-04-17 23:58:24.197 [INFO][5084] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" iface="eth0" netns="" Apr 17 23:58:24.274838 containerd[1571]: 2026-04-17 23:58:24.197 [INFO][5084] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Apr 17 23:58:24.274838 containerd[1571]: 2026-04-17 23:58:24.197 [INFO][5084] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Apr 17 23:58:24.274838 containerd[1571]: 2026-04-17 23:58:24.256 [INFO][5091] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" HandleID="k8s-pod-network.71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Workload="172--232--15--112-k8s-whisker--6db78f5767--pb686-eth0" Apr 17 23:58:24.274838 containerd[1571]: 2026-04-17 23:58:24.257 [INFO][5091] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:24.274838 containerd[1571]: 2026-04-17 23:58:24.258 [INFO][5091] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:24.274838 containerd[1571]: 2026-04-17 23:58:24.266 [WARNING][5091] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" HandleID="k8s-pod-network.71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Workload="172--232--15--112-k8s-whisker--6db78f5767--pb686-eth0" Apr 17 23:58:24.274838 containerd[1571]: 2026-04-17 23:58:24.266 [INFO][5091] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" HandleID="k8s-pod-network.71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Workload="172--232--15--112-k8s-whisker--6db78f5767--pb686-eth0" Apr 17 23:58:24.274838 containerd[1571]: 2026-04-17 23:58:24.268 [INFO][5091] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:24.274838 containerd[1571]: 2026-04-17 23:58:24.271 [INFO][5084] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Apr 17 23:58:24.274838 containerd[1571]: time="2026-04-17T23:58:24.274680724Z" level=info msg="TearDown network for sandbox \"71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198\" successfully" Apr 17 23:58:24.274838 containerd[1571]: time="2026-04-17T23:58:24.274708855Z" level=info msg="StopPodSandbox for \"71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198\" returns successfully" Apr 17 23:58:24.276074 containerd[1571]: time="2026-04-17T23:58:24.276042564Z" level=info msg="RemovePodSandbox for \"71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198\"" Apr 17 23:58:24.276156 containerd[1571]: time="2026-04-17T23:58:24.276083346Z" level=info msg="Forcibly stopping sandbox \"71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198\"" Apr 17 23:58:24.399516 containerd[1571]: 2026-04-17 23:58:24.340 [WARNING][5110] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" WorkloadEndpoint="172--232--15--112-k8s-whisker--6db78f5767--pb686-eth0" Apr 17 23:58:24.399516 containerd[1571]: 2026-04-17 23:58:24.340 [INFO][5110] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Apr 17 23:58:24.399516 containerd[1571]: 2026-04-17 23:58:24.341 [INFO][5110] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" iface="eth0" netns="" Apr 17 23:58:24.399516 containerd[1571]: 2026-04-17 23:58:24.341 [INFO][5110] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Apr 17 23:58:24.399516 containerd[1571]: 2026-04-17 23:58:24.341 [INFO][5110] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Apr 17 23:58:24.399516 containerd[1571]: 2026-04-17 23:58:24.375 [INFO][5117] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" HandleID="k8s-pod-network.71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Workload="172--232--15--112-k8s-whisker--6db78f5767--pb686-eth0" Apr 17 23:58:24.399516 containerd[1571]: 2026-04-17 23:58:24.375 [INFO][5117] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:24.399516 containerd[1571]: 2026-04-17 23:58:24.375 [INFO][5117] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:24.399516 containerd[1571]: 2026-04-17 23:58:24.388 [WARNING][5117] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" HandleID="k8s-pod-network.71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Workload="172--232--15--112-k8s-whisker--6db78f5767--pb686-eth0" Apr 17 23:58:24.399516 containerd[1571]: 2026-04-17 23:58:24.388 [INFO][5117] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" HandleID="k8s-pod-network.71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Workload="172--232--15--112-k8s-whisker--6db78f5767--pb686-eth0" Apr 17 23:58:24.399516 containerd[1571]: 2026-04-17 23:58:24.394 [INFO][5117] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:24.399516 containerd[1571]: 2026-04-17 23:58:24.396 [INFO][5110] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198" Apr 17 23:58:24.400292 containerd[1571]: time="2026-04-17T23:58:24.400074111Z" level=info msg="TearDown network for sandbox \"71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198\" successfully" Apr 17 23:58:24.409324 containerd[1571]: time="2026-04-17T23:58:24.409289764Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:58:24.410204 containerd[1571]: time="2026-04-17T23:58:24.410028102Z" level=info msg="RemovePodSandbox \"71b762fb50d2fe017971e61d7ff772f09e13c804fd43bf13e74efe19a5239198\" returns successfully" Apr 17 23:58:24.414074 containerd[1571]: time="2026-04-17T23:58:24.413007005Z" level=info msg="StopPodSandbox for \"aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5\"" Apr 17 23:58:24.546642 containerd[1571]: 2026-04-17 23:58:24.487 [WARNING][5131] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-csi--node--driver--xt426-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8b300517-4fdc-4aae-b868-e2f538976f49", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6", Pod:"csi-node-driver-xt426", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.119.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali02d38c04f0e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:24.546642 containerd[1571]: 2026-04-17 23:58:24.488 [INFO][5131] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Apr 17 23:58:24.546642 containerd[1571]: 2026-04-17 23:58:24.488 [INFO][5131] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" iface="eth0" netns="" Apr 17 23:58:24.546642 containerd[1571]: 2026-04-17 23:58:24.488 [INFO][5131] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Apr 17 23:58:24.546642 containerd[1571]: 2026-04-17 23:58:24.488 [INFO][5131] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Apr 17 23:58:24.546642 containerd[1571]: 2026-04-17 23:58:24.528 [INFO][5138] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" HandleID="k8s-pod-network.aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Workload="172--232--15--112-k8s-csi--node--driver--xt426-eth0" Apr 17 23:58:24.546642 containerd[1571]: 2026-04-17 23:58:24.529 [INFO][5138] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:24.546642 containerd[1571]: 2026-04-17 23:58:24.529 [INFO][5138] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:24.546642 containerd[1571]: 2026-04-17 23:58:24.537 [WARNING][5138] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" HandleID="k8s-pod-network.aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Workload="172--232--15--112-k8s-csi--node--driver--xt426-eth0" Apr 17 23:58:24.546642 containerd[1571]: 2026-04-17 23:58:24.537 [INFO][5138] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" HandleID="k8s-pod-network.aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Workload="172--232--15--112-k8s-csi--node--driver--xt426-eth0" Apr 17 23:58:24.546642 containerd[1571]: 2026-04-17 23:58:24.539 [INFO][5138] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:24.546642 containerd[1571]: 2026-04-17 23:58:24.542 [INFO][5131] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Apr 17 23:58:24.547812 containerd[1571]: time="2026-04-17T23:58:24.546671727Z" level=info msg="TearDown network for sandbox \"aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5\" successfully" Apr 17 23:58:24.547812 containerd[1571]: time="2026-04-17T23:58:24.546698018Z" level=info msg="StopPodSandbox for \"aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5\" returns successfully" Apr 17 23:58:24.548109 containerd[1571]: time="2026-04-17T23:58:24.548084979Z" level=info msg="RemovePodSandbox for \"aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5\"" Apr 17 23:58:24.548161 containerd[1571]: time="2026-04-17T23:58:24.548135272Z" level=info msg="Forcibly stopping sandbox \"aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5\"" Apr 17 23:58:24.736274 containerd[1571]: 2026-04-17 23:58:24.654 [WARNING][5153] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-csi--node--driver--xt426-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8b300517-4fdc-4aae-b868-e2f538976f49", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6", Pod:"csi-node-driver-xt426", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.119.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali02d38c04f0e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:24.736274 containerd[1571]: 2026-04-17 23:58:24.654 [INFO][5153] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Apr 17 23:58:24.736274 containerd[1571]: 2026-04-17 23:58:24.654 [INFO][5153] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" iface="eth0" netns="" Apr 17 23:58:24.736274 containerd[1571]: 2026-04-17 23:58:24.654 [INFO][5153] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Apr 17 23:58:24.736274 containerd[1571]: 2026-04-17 23:58:24.654 [INFO][5153] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Apr 17 23:58:24.736274 containerd[1571]: 2026-04-17 23:58:24.705 [INFO][5161] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" HandleID="k8s-pod-network.aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Workload="172--232--15--112-k8s-csi--node--driver--xt426-eth0" Apr 17 23:58:24.736274 containerd[1571]: 2026-04-17 23:58:24.706 [INFO][5161] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:24.736274 containerd[1571]: 2026-04-17 23:58:24.706 [INFO][5161] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:24.736274 containerd[1571]: 2026-04-17 23:58:24.714 [WARNING][5161] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" HandleID="k8s-pod-network.aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Workload="172--232--15--112-k8s-csi--node--driver--xt426-eth0" Apr 17 23:58:24.736274 containerd[1571]: 2026-04-17 23:58:24.714 [INFO][5161] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" HandleID="k8s-pod-network.aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Workload="172--232--15--112-k8s-csi--node--driver--xt426-eth0" Apr 17 23:58:24.736274 containerd[1571]: 2026-04-17 23:58:24.716 [INFO][5161] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:24.736274 containerd[1571]: 2026-04-17 23:58:24.723 [INFO][5153] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5" Apr 17 23:58:24.736274 containerd[1571]: time="2026-04-17T23:58:24.735750713Z" level=info msg="TearDown network for sandbox \"aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5\" successfully" Apr 17 23:58:24.744781 containerd[1571]: time="2026-04-17T23:58:24.744475291Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:58:24.744781 containerd[1571]: time="2026-04-17T23:58:24.744568636Z" level=info msg="RemovePodSandbox \"aee99e7511f5ee035b367055c3024dc0dc44f2ecb55582f0d383957155906aa5\" returns successfully" Apr 17 23:58:24.746540 containerd[1571]: time="2026-04-17T23:58:24.746024801Z" level=info msg="StopPodSandbox for \"ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827\"" Apr 17 23:58:24.938775 containerd[1571]: 2026-04-17 23:58:24.834 [WARNING][5175] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0", GenerateName:"calico-kube-controllers-5b464cddd7-", Namespace:"calico-system", SelfLink:"", UID:"83eecf55-6f3a-4d4f-964b-d260567b14a7", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b464cddd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662", Pod:"calico-kube-controllers-5b464cddd7-wff8q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b9e7ab5f2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:24.938775 containerd[1571]: 2026-04-17 23:58:24.835 [INFO][5175] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Apr 17 23:58:24.938775 containerd[1571]: 2026-04-17 23:58:24.835 [INFO][5175] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" iface="eth0" netns="" Apr 17 23:58:24.938775 containerd[1571]: 2026-04-17 23:58:24.836 [INFO][5175] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Apr 17 23:58:24.938775 containerd[1571]: 2026-04-17 23:58:24.836 [INFO][5175] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Apr 17 23:58:24.938775 containerd[1571]: 2026-04-17 23:58:24.907 [INFO][5182] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" HandleID="k8s-pod-network.ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Workload="172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0" Apr 17 23:58:24.938775 containerd[1571]: 2026-04-17 23:58:24.910 [INFO][5182] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:24.938775 containerd[1571]: 2026-04-17 23:58:24.910 [INFO][5182] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:24.938775 containerd[1571]: 2026-04-17 23:58:24.923 [WARNING][5182] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" HandleID="k8s-pod-network.ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Workload="172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0" Apr 17 23:58:24.938775 containerd[1571]: 2026-04-17 23:58:24.925 [INFO][5182] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" HandleID="k8s-pod-network.ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Workload="172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0" Apr 17 23:58:24.938775 containerd[1571]: 2026-04-17 23:58:24.927 [INFO][5182] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:24.938775 containerd[1571]: 2026-04-17 23:58:24.934 [INFO][5175] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Apr 17 23:58:24.940375 containerd[1571]: time="2026-04-17T23:58:24.940238651Z" level=info msg="TearDown network for sandbox \"ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827\" successfully" Apr 17 23:58:24.940515 containerd[1571]: time="2026-04-17T23:58:24.940448842Z" level=info msg="StopPodSandbox for \"ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827\" returns successfully" Apr 17 23:58:24.941948 containerd[1571]: time="2026-04-17T23:58:24.941856734Z" level=info msg="RemovePodSandbox for \"ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827\"" Apr 17 23:58:24.942277 containerd[1571]: time="2026-04-17T23:58:24.941902186Z" level=info msg="Forcibly stopping sandbox \"ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827\"" Apr 17 23:58:24.993270 containerd[1571]: time="2026-04-17T23:58:24.992467252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:24.995454 containerd[1571]: time="2026-04-17T23:58:24.995261576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 17 23:58:24.997051 containerd[1571]: time="2026-04-17T23:58:24.996373693Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:24.999913 containerd[1571]: time="2026-04-17T23:58:24.999887133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:25.000858 containerd[1571]: time="2026-04-17T23:58:25.000574548Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.607868239s" Apr 17 23:58:25.001541 containerd[1571]: time="2026-04-17T23:58:25.001516806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 17 23:58:25.006203 containerd[1571]: time="2026-04-17T23:58:25.006086145Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:58:25.012593 containerd[1571]: time="2026-04-17T23:58:25.012542787Z" level=info msg="CreateContainer within sandbox \"abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 23:58:25.037375 containerd[1571]: time="2026-04-17T23:58:25.037217161Z" level=info msg="CreateContainer within sandbox \"abd001759dac0d2b45d96c156e36424f5ee4f5570966c6d79ce225fd8923bd04\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f1d30322c7141e2f702eb5906bc71ba6421e0e8b01e2a59a0a3047b1cfea406f\"" Apr 17 23:58:25.047769 containerd[1571]: time="2026-04-17T23:58:25.047289544Z" level=info msg="StartContainer for \"f1d30322c7141e2f702eb5906bc71ba6421e0e8b01e2a59a0a3047b1cfea406f\"" Apr 17 23:58:25.073659 containerd[1571]: 2026-04-17 23:58:25.012 [WARNING][5197] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0", GenerateName:"calico-kube-controllers-5b464cddd7-", Namespace:"calico-system", SelfLink:"", UID:"83eecf55-6f3a-4d4f-964b-d260567b14a7", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b464cddd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662", Pod:"calico-kube-controllers-5b464cddd7-wff8q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b9e7ab5f2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:25.073659 containerd[1571]: 2026-04-17 23:58:25.012 [INFO][5197] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Apr 17 23:58:25.073659 containerd[1571]: 2026-04-17 23:58:25.012 [INFO][5197] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" iface="eth0" netns="" Apr 17 23:58:25.073659 containerd[1571]: 2026-04-17 23:58:25.012 [INFO][5197] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Apr 17 23:58:25.073659 containerd[1571]: 2026-04-17 23:58:25.012 [INFO][5197] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Apr 17 23:58:25.073659 containerd[1571]: 2026-04-17 23:58:25.056 [INFO][5208] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" HandleID="k8s-pod-network.ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Workload="172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0" Apr 17 23:58:25.073659 containerd[1571]: 2026-04-17 23:58:25.057 [INFO][5208] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:25.073659 containerd[1571]: 2026-04-17 23:58:25.057 [INFO][5208] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:25.073659 containerd[1571]: 2026-04-17 23:58:25.064 [WARNING][5208] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" HandleID="k8s-pod-network.ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Workload="172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0" Apr 17 23:58:25.073659 containerd[1571]: 2026-04-17 23:58:25.064 [INFO][5208] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" HandleID="k8s-pod-network.ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Workload="172--232--15--112-k8s-calico--kube--controllers--5b464cddd7--wff8q-eth0" Apr 17 23:58:25.073659 containerd[1571]: 2026-04-17 23:58:25.067 [INFO][5208] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:25.073659 containerd[1571]: 2026-04-17 23:58:25.070 [INFO][5197] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827" Apr 17 23:58:25.075144 containerd[1571]: time="2026-04-17T23:58:25.074199959Z" level=info msg="TearDown network for sandbox \"ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827\" successfully" Apr 17 23:58:25.081815 containerd[1571]: time="2026-04-17T23:58:25.081784598Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:58:25.081989 containerd[1571]: time="2026-04-17T23:58:25.081970557Z" level=info msg="RemovePodSandbox \"ac5ee35ff38a084cb18c41417f72f4fd3b923cf3311928d1723b361a8fdb1827\" returns successfully" Apr 17 23:58:25.082685 containerd[1571]: time="2026-04-17T23:58:25.082663392Z" level=info msg="StopPodSandbox for \"6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4\"" Apr 17 23:58:25.206744 containerd[1571]: time="2026-04-17T23:58:25.206693181Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:25.207234 containerd[1571]: time="2026-04-17T23:58:25.207180955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 23:58:25.223492 containerd[1571]: time="2026-04-17T23:58:25.223437518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 215.49812ms" Apr 17 23:58:25.223607 containerd[1571]: time="2026-04-17T23:58:25.223505631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:58:25.224132 containerd[1571]: 2026-04-17 23:58:25.144 [WARNING][5230] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0c48190d-40ea-4a8e-95db-e61cbffe8eda", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54", Pod:"coredns-674b8bbfcf-88tx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3ef05c0435", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:25.224132 containerd[1571]: 2026-04-17 23:58:25.145 [INFO][5230] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Apr 17 23:58:25.224132 containerd[1571]: 2026-04-17 23:58:25.145 [INFO][5230] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" iface="eth0" netns="" Apr 17 23:58:25.224132 containerd[1571]: 2026-04-17 23:58:25.145 [INFO][5230] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Apr 17 23:58:25.224132 containerd[1571]: 2026-04-17 23:58:25.145 [INFO][5230] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Apr 17 23:58:25.224132 containerd[1571]: 2026-04-17 23:58:25.192 [INFO][5252] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" HandleID="k8s-pod-network.6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0" Apr 17 23:58:25.224132 containerd[1571]: 2026-04-17 23:58:25.192 [INFO][5252] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:25.224132 containerd[1571]: 2026-04-17 23:58:25.192 [INFO][5252] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:25.224132 containerd[1571]: 2026-04-17 23:58:25.200 [WARNING][5252] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" HandleID="k8s-pod-network.6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0" Apr 17 23:58:25.224132 containerd[1571]: 2026-04-17 23:58:25.200 [INFO][5252] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" HandleID="k8s-pod-network.6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0" Apr 17 23:58:25.224132 containerd[1571]: 2026-04-17 23:58:25.206 [INFO][5252] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:25.224132 containerd[1571]: 2026-04-17 23:58:25.217 [INFO][5230] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Apr 17 23:58:25.224132 containerd[1571]: time="2026-04-17T23:58:25.223706121Z" level=info msg="TearDown network for sandbox \"6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4\" successfully" Apr 17 23:58:25.224132 containerd[1571]: time="2026-04-17T23:58:25.223719552Z" level=info msg="StopPodSandbox for \"6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4\" returns successfully" Apr 17 23:58:25.227333 containerd[1571]: time="2026-04-17T23:58:25.227265049Z" level=info msg="RemovePodSandbox for \"6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4\"" Apr 17 23:58:25.227384 containerd[1571]: time="2026-04-17T23:58:25.227334773Z" level=info msg="Forcibly stopping sandbox \"6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4\"" Apr 17 23:58:25.230500 containerd[1571]: time="2026-04-17T23:58:25.230441678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 23:58:25.240809 containerd[1571]: time="2026-04-17T23:58:25.240065929Z" level=info msg="StartContainer for \"f1d30322c7141e2f702eb5906bc71ba6421e0e8b01e2a59a0a3047b1cfea406f\" returns successfully" Apr 17 23:58:25.241385 containerd[1571]: time="2026-04-17T23:58:25.241346343Z" level=info msg="CreateContainer within sandbox \"cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:58:25.259582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2089268361.mount: Deactivated successfully. Apr 17 23:58:25.262567 containerd[1571]: time="2026-04-17T23:58:25.261571504Z" level=info msg="CreateContainer within sandbox \"cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"76fcac19d2dc1ed6e994cf7f7af5b1cc0f97a31f437bc46aadd0372f4cc056a2\"" Apr 17 23:58:25.262567 containerd[1571]: time="2026-04-17T23:58:25.262515471Z" level=info msg="StartContainer for \"76fcac19d2dc1ed6e994cf7f7af5b1cc0f97a31f437bc46aadd0372f4cc056a2\"" Apr 17 23:58:25.372035 containerd[1571]: 2026-04-17 23:58:25.321 [WARNING][5277] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0c48190d-40ea-4a8e-95db-e61cbffe8eda", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"1a97d9972bf6ffdaea518011fa81275a6e2f90172695e0bbeff2d2f9a39e7a54", Pod:"coredns-674b8bbfcf-88tx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3ef05c0435", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:25.372035 containerd[1571]: 2026-04-17 23:58:25.322 [INFO][5277] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Apr 17 23:58:25.372035 containerd[1571]: 2026-04-17 23:58:25.322 [INFO][5277] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" iface="eth0" netns="" Apr 17 23:58:25.372035 containerd[1571]: 2026-04-17 23:58:25.322 [INFO][5277] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Apr 17 23:58:25.372035 containerd[1571]: 2026-04-17 23:58:25.322 [INFO][5277] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Apr 17 23:58:25.372035 containerd[1571]: 2026-04-17 23:58:25.354 [INFO][5310] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" HandleID="k8s-pod-network.6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0" Apr 17 23:58:25.372035 containerd[1571]: 2026-04-17 23:58:25.355 [INFO][5310] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:25.372035 containerd[1571]: 2026-04-17 23:58:25.356 [INFO][5310] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:58:25.372035 containerd[1571]: 2026-04-17 23:58:25.361 [WARNING][5310] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" HandleID="k8s-pod-network.6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0" Apr 17 23:58:25.372035 containerd[1571]: 2026-04-17 23:58:25.362 [INFO][5310] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" HandleID="k8s-pod-network.6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--88tx2-eth0" Apr 17 23:58:25.372035 containerd[1571]: 2026-04-17 23:58:25.364 [INFO][5310] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:25.372035 containerd[1571]: 2026-04-17 23:58:25.369 [INFO][5277] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4" Apr 17 23:58:25.372491 containerd[1571]: time="2026-04-17T23:58:25.372067567Z" level=info msg="TearDown network for sandbox \"6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4\" successfully" Apr 17 23:58:25.376306 containerd[1571]: time="2026-04-17T23:58:25.376241365Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:58:25.376397 containerd[1571]: time="2026-04-17T23:58:25.376371742Z" level=info msg="RemovePodSandbox \"6881f312076be9db2a4df0fe01e8a1368f2b6b78440d9118744affb6df2d38e4\" returns successfully" Apr 17 23:58:25.380461 containerd[1571]: time="2026-04-17T23:58:25.380289647Z" level=info msg="StopPodSandbox for \"fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5\"" Apr 17 23:58:25.412632 containerd[1571]: time="2026-04-17T23:58:25.412595652Z" level=info msg="StartContainer for \"76fcac19d2dc1ed6e994cf7f7af5b1cc0f97a31f437bc46aadd0372f4cc056a2\" returns successfully" Apr 17 23:58:25.474485 containerd[1571]: 2026-04-17 23:58:25.430 [WARNING][5326] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0", GenerateName:"calico-apiserver-6fff8bdfbc-", Namespace:"calico-system", SelfLink:"", UID:"47346385-5f17-4c08-b4a3-a3958d4f414b", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fff8bdfbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b", Pod:"calico-apiserver-6fff8bdfbc-mvj4v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali825d1771b61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:25.474485 containerd[1571]: 2026-04-17 23:58:25.431 [INFO][5326] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Apr 17 23:58:25.474485 containerd[1571]: 2026-04-17 23:58:25.431 [INFO][5326] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" iface="eth0" netns="" Apr 17 23:58:25.474485 containerd[1571]: 2026-04-17 23:58:25.431 [INFO][5326] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Apr 17 23:58:25.474485 containerd[1571]: 2026-04-17 23:58:25.431 [INFO][5326] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Apr 17 23:58:25.474485 containerd[1571]: 2026-04-17 23:58:25.461 [INFO][5342] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" HandleID="k8s-pod-network.fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0" Apr 17 23:58:25.474485 containerd[1571]: 2026-04-17 23:58:25.461 [INFO][5342] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:25.474485 containerd[1571]: 2026-04-17 23:58:25.461 [INFO][5342] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:25.474485 containerd[1571]: 2026-04-17 23:58:25.466 [WARNING][5342] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" HandleID="k8s-pod-network.fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0" Apr 17 23:58:25.474485 containerd[1571]: 2026-04-17 23:58:25.467 [INFO][5342] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" HandleID="k8s-pod-network.fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0" Apr 17 23:58:25.474485 containerd[1571]: 2026-04-17 23:58:25.469 [INFO][5342] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:25.474485 containerd[1571]: 2026-04-17 23:58:25.472 [INFO][5326] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Apr 17 23:58:25.474485 containerd[1571]: time="2026-04-17T23:58:25.474442863Z" level=info msg="TearDown network for sandbox \"fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5\" successfully" Apr 17 23:58:25.474485 containerd[1571]: time="2026-04-17T23:58:25.474468015Z" level=info msg="StopPodSandbox for \"fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5\" returns successfully" Apr 17 23:58:25.477157 containerd[1571]: time="2026-04-17T23:58:25.474952269Z" level=info msg="RemovePodSandbox for \"fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5\"" Apr 17 23:58:25.477157 containerd[1571]: time="2026-04-17T23:58:25.474978750Z" level=info msg="Forcibly stopping sandbox \"fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5\"" Apr 17 23:58:25.603574 containerd[1571]: 2026-04-17 23:58:25.532 [WARNING][5357] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0", GenerateName:"calico-apiserver-6fff8bdfbc-", Namespace:"calico-system", SelfLink:"", UID:"47346385-5f17-4c08-b4a3-a3958d4f414b", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fff8bdfbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"cdab7321252d17c2b83bbd2e5a869ee49924681cd3a5da45cedde3052125551b", Pod:"calico-apiserver-6fff8bdfbc-mvj4v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali825d1771b61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:25.603574 containerd[1571]: 2026-04-17 23:58:25.532 [INFO][5357] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Apr 17 23:58:25.603574 containerd[1571]: 2026-04-17 23:58:25.532 [INFO][5357] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" iface="eth0" netns="" Apr 17 23:58:25.603574 containerd[1571]: 2026-04-17 23:58:25.532 [INFO][5357] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Apr 17 23:58:25.603574 containerd[1571]: 2026-04-17 23:58:25.532 [INFO][5357] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Apr 17 23:58:25.603574 containerd[1571]: 2026-04-17 23:58:25.575 [INFO][5369] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" HandleID="k8s-pod-network.fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0" Apr 17 23:58:25.603574 containerd[1571]: 2026-04-17 23:58:25.575 [INFO][5369] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:25.603574 containerd[1571]: 2026-04-17 23:58:25.576 [INFO][5369] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:58:25.603574 containerd[1571]: 2026-04-17 23:58:25.585 [WARNING][5369] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" HandleID="k8s-pod-network.fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0" Apr 17 23:58:25.603574 containerd[1571]: 2026-04-17 23:58:25.585 [INFO][5369] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" HandleID="k8s-pod-network.fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Workload="172--232--15--112-k8s-calico--apiserver--6fff8bdfbc--mvj4v-eth0" Apr 17 23:58:25.603574 containerd[1571]: 2026-04-17 23:58:25.589 [INFO][5369] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:25.603574 containerd[1571]: 2026-04-17 23:58:25.597 [INFO][5357] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5" Apr 17 23:58:25.608203 containerd[1571]: time="2026-04-17T23:58:25.605236690Z" level=info msg="TearDown network for sandbox \"fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5\" successfully" Apr 17 23:58:25.612248 containerd[1571]: time="2026-04-17T23:58:25.612183788Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:58:25.612248 containerd[1571]: time="2026-04-17T23:58:25.612245951Z" level=info msg="RemovePodSandbox \"fd48d519002c74cca6be551d8312193730db8e965c041a479f77f2ef372fc4b5\" returns successfully" Apr 17 23:58:25.613188 containerd[1571]: time="2026-04-17T23:58:25.612849321Z" level=info msg="StopPodSandbox for \"0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc\"" Apr 17 23:58:25.692708 containerd[1571]: 2026-04-17 23:58:25.655 [WARNING][5384] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4eb46d8d-9eca-422e-ba74-930bbc0b7688", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad", Pod:"coredns-674b8bbfcf-rqhzd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide41ec509a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:25.692708 containerd[1571]: 2026-04-17 23:58:25.656 [INFO][5384] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Apr 17 23:58:25.692708 containerd[1571]: 2026-04-17 23:58:25.656 [INFO][5384] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" iface="eth0" netns="" Apr 17 23:58:25.692708 containerd[1571]: 2026-04-17 23:58:25.656 [INFO][5384] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Apr 17 23:58:25.692708 containerd[1571]: 2026-04-17 23:58:25.656 [INFO][5384] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Apr 17 23:58:25.692708 containerd[1571]: 2026-04-17 23:58:25.679 [INFO][5391] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" HandleID="k8s-pod-network.0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0" Apr 17 23:58:25.692708 containerd[1571]: 2026-04-17 23:58:25.679 [INFO][5391] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:25.692708 containerd[1571]: 2026-04-17 23:58:25.680 [INFO][5391] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:58:25.692708 containerd[1571]: 2026-04-17 23:58:25.685 [WARNING][5391] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" HandleID="k8s-pod-network.0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0" Apr 17 23:58:25.692708 containerd[1571]: 2026-04-17 23:58:25.685 [INFO][5391] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" HandleID="k8s-pod-network.0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0" Apr 17 23:58:25.692708 containerd[1571]: 2026-04-17 23:58:25.687 [INFO][5391] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:25.692708 containerd[1571]: 2026-04-17 23:58:25.689 [INFO][5384] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Apr 17 23:58:25.692708 containerd[1571]: time="2026-04-17T23:58:25.692445709Z" level=info msg="TearDown network for sandbox \"0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc\" successfully" Apr 17 23:58:25.692708 containerd[1571]: time="2026-04-17T23:58:25.692515393Z" level=info msg="StopPodSandbox for \"0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc\" returns successfully" Apr 17 23:58:25.697426 containerd[1571]: time="2026-04-17T23:58:25.693319143Z" level=info msg="RemovePodSandbox for \"0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc\"" Apr 17 23:58:25.697426 containerd[1571]: time="2026-04-17T23:58:25.693396877Z" level=info msg="Forcibly stopping sandbox \"0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc\"" Apr 17 23:58:25.794769 containerd[1571]: 2026-04-17 23:58:25.744 [WARNING][5405] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4eb46d8d-9eca-422e-ba74-930bbc0b7688", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 57, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-15-112", ContainerID:"e0223b6ea5028add07969a01644dfb8f413b662e7d0356a14c76295a95bd5dad", Pod:"coredns-674b8bbfcf-rqhzd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide41ec509a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:58:25.794769 containerd[1571]: 2026-04-17 23:58:25.744 [INFO][5405] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Apr 17 23:58:25.794769 containerd[1571]: 2026-04-17 23:58:25.744 [INFO][5405] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" iface="eth0" netns="" Apr 17 23:58:25.794769 containerd[1571]: 2026-04-17 23:58:25.744 [INFO][5405] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Apr 17 23:58:25.794769 containerd[1571]: 2026-04-17 23:58:25.745 [INFO][5405] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Apr 17 23:58:25.794769 containerd[1571]: 2026-04-17 23:58:25.777 [INFO][5412] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" HandleID="k8s-pod-network.0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0" Apr 17 23:58:25.794769 containerd[1571]: 2026-04-17 23:58:25.777 [INFO][5412] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:58:25.794769 containerd[1571]: 2026-04-17 23:58:25.777 [INFO][5412] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:58:25.794769 containerd[1571]: 2026-04-17 23:58:25.783 [WARNING][5412] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" HandleID="k8s-pod-network.0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0" Apr 17 23:58:25.794769 containerd[1571]: 2026-04-17 23:58:25.783 [INFO][5412] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" HandleID="k8s-pod-network.0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Workload="172--232--15--112-k8s-coredns--674b8bbfcf--rqhzd-eth0" Apr 17 23:58:25.794769 containerd[1571]: 2026-04-17 23:58:25.785 [INFO][5412] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:58:25.794769 containerd[1571]: 2026-04-17 23:58:25.790 [INFO][5405] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc" Apr 17 23:58:25.794769 containerd[1571]: time="2026-04-17T23:58:25.794282609Z" level=info msg="TearDown network for sandbox \"0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc\" successfully" Apr 17 23:58:25.798229 containerd[1571]: time="2026-04-17T23:58:25.798172613Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:58:25.798325 containerd[1571]: time="2026-04-17T23:58:25.798298960Z" level=info msg="RemovePodSandbox \"0db931edac01b31a6b29b10b6647cbee5e264d9ae5243aa1f8651f95bea01ccc\" returns successfully" Apr 17 23:58:25.861634 kubelet[2741]: I0417 23:58:25.858681 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6fff8bdfbc-7mc8b" podStartSLOduration=38.277113175 podStartE2EDuration="40.858580173s" podCreationTimestamp="2026-04-17 23:57:45 +0000 UTC" firstStartedPulling="2026-04-17 23:58:19.810997139 +0000 UTC m=+56.457245682" lastFinishedPulling="2026-04-17 23:58:22.392464137 +0000 UTC m=+59.038712680" observedRunningTime="2026-04-17 23:58:22.83825909 +0000 UTC m=+59.484507643" watchObservedRunningTime="2026-04-17 23:58:25.858580173 +0000 UTC m=+62.504828726" Apr 17 23:58:25.881878 kubelet[2741]: I0417 23:58:25.880369 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6fff8bdfbc-mvj4v" podStartSLOduration=35.655222857 podStartE2EDuration="40.88034676s" podCreationTimestamp="2026-04-17 23:57:45 +0000 UTC" firstStartedPulling="2026-04-17 23:58:19.99962089 +0000 UTC m=+56.645869433" lastFinishedPulling="2026-04-17 23:58:25.224744793 +0000 UTC m=+61.870993336" observedRunningTime="2026-04-17 23:58:25.863847476 +0000 UTC m=+62.510096029" watchObservedRunningTime="2026-04-17 23:58:25.88034676 +0000 UTC m=+62.526595303" Apr 17 23:58:25.881878 kubelet[2741]: I0417 23:58:25.881623 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-l2kf2" podStartSLOduration=35.741691081 podStartE2EDuration="40.881616324s" podCreationTimestamp="2026-04-17 23:57:45 +0000 UTC" firstStartedPulling="2026-04-17 23:58:19.863778583 +0000 UTC m=+56.510027126" lastFinishedPulling="2026-04-17 23:58:25.003703816 +0000 UTC 
m=+61.649952369" observedRunningTime="2026-04-17 23:58:25.881044705 +0000 UTC m=+62.527293268" watchObservedRunningTime="2026-04-17 23:58:25.881616324 +0000 UTC m=+62.527864877" Apr 17 23:58:26.191924 containerd[1571]: time="2026-04-17T23:58:26.191880811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:26.193966 containerd[1571]: time="2026-04-17T23:58:26.193925360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 17 23:58:26.194763 containerd[1571]: time="2026-04-17T23:58:26.194740010Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:26.198338 containerd[1571]: time="2026-04-17T23:58:26.197756866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:26.198449 containerd[1571]: time="2026-04-17T23:58:26.198428089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 967.098187ms" Apr 17 23:58:26.198538 containerd[1571]: time="2026-04-17T23:58:26.198522244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 17 23:58:26.201803 containerd[1571]: time="2026-04-17T23:58:26.201783502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 23:58:26.208663 containerd[1571]: time="2026-04-17T23:58:26.208624245Z" level=info msg="CreateContainer within sandbox \"e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 23:58:26.229995 containerd[1571]: time="2026-04-17T23:58:26.229741743Z" level=info msg="CreateContainer within sandbox \"e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"54aff4082441c2676e6d7fcf3da654d5dab7ca4dd14a3a7b00e1474cd835abde\"" Apr 17 23:58:26.233151 containerd[1571]: time="2026-04-17T23:58:26.231699908Z" level=info msg="StartContainer for \"54aff4082441c2676e6d7fcf3da654d5dab7ca4dd14a3a7b00e1474cd835abde\"" Apr 17 23:58:26.321111 systemd[1]: run-containerd-runc-k8s.io-54aff4082441c2676e6d7fcf3da654d5dab7ca4dd14a3a7b00e1474cd835abde-runc.UkaoAW.mount: Deactivated successfully. 
Apr 17 23:58:26.388288 containerd[1571]: time="2026-04-17T23:58:26.388230926Z" level=info msg="StartContainer for \"54aff4082441c2676e6d7fcf3da654d5dab7ca4dd14a3a7b00e1474cd835abde\" returns successfully" Apr 17 23:58:26.861207 kubelet[2741]: I0417 23:58:26.859719 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:58:27.161520 containerd[1571]: time="2026-04-17T23:58:27.161450142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:27.164140 containerd[1571]: time="2026-04-17T23:58:27.162894151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 17 23:58:27.164140 containerd[1571]: time="2026-04-17T23:58:27.163691678Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:27.167651 containerd[1571]: time="2026-04-17T23:58:27.167596443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:27.178597 containerd[1571]: time="2026-04-17T23:58:27.178506531Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 976.494867ms" Apr 17 23:58:27.178597 containerd[1571]: time="2026-04-17T23:58:27.178556143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 17 23:58:27.185841 containerd[1571]: time="2026-04-17T23:58:27.185015359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 23:58:27.190573 containerd[1571]: time="2026-04-17T23:58:27.190532601Z" level=info msg="CreateContainer within sandbox \"f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:58:27.203425 containerd[1571]: time="2026-04-17T23:58:27.203381509Z" level=info msg="CreateContainer within sandbox \"f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ce090c20db70010f8175f05dddff79d440da8c093accfdada3f92243cbac233d\"" Apr 17 23:58:27.206476 containerd[1571]: time="2026-04-17T23:58:27.204263261Z" level=info msg="StartContainer for \"ce090c20db70010f8175f05dddff79d440da8c093accfdada3f92243cbac233d\"" Apr 17 23:58:27.313553 containerd[1571]: time="2026-04-17T23:58:27.313493918Z" level=info msg="StartContainer for \"ce090c20db70010f8175f05dddff79d440da8c093accfdada3f92243cbac233d\" returns successfully" Apr 17 23:58:29.069767 containerd[1571]: time="2026-04-17T23:58:29.069680551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:29.070938 containerd[1571]: time="2026-04-17T23:58:29.070802192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes 
read=52406348" Apr 17 23:58:29.072471 containerd[1571]: time="2026-04-17T23:58:29.071470552Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:29.073824 containerd[1571]: time="2026-04-17T23:58:29.073800037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:29.074640 containerd[1571]: time="2026-04-17T23:58:29.074617493Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 1.889570033s" Apr 17 23:58:29.074748 containerd[1571]: time="2026-04-17T23:58:29.074731138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 17 23:58:29.078284 containerd[1571]: time="2026-04-17T23:58:29.077407439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 23:58:29.096728 containerd[1571]: time="2026-04-17T23:58:29.095521353Z" level=info msg="CreateContainer within sandbox \"c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 23:58:29.118156 containerd[1571]: time="2026-04-17T23:58:29.117853268Z" level=info msg="CreateContainer within sandbox \"c8b3753e3c35c8af722a3b6463ffc111ece9d3260bc94d8bacf9feb5c0bf3662\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e02c516f9f3eceeeb394ef815f67badc47307c7db5497cd6495fe74a34153d62\"" Apr 17 23:58:29.119818 containerd[1571]: time="2026-04-17T23:58:29.118629972Z" level=info msg="StartContainer for \"e02c516f9f3eceeeb394ef815f67badc47307c7db5497cd6495fe74a34153d62\"" Apr 17 23:58:29.118846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3290516261.mount: Deactivated successfully. 
Apr 17 23:58:29.218621 containerd[1571]: time="2026-04-17T23:58:29.218494203Z" level=info msg="StartContainer for \"e02c516f9f3eceeeb394ef815f67badc47307c7db5497cd6495fe74a34153d62\" returns successfully" Apr 17 23:58:29.901636 kubelet[2741]: I0417 23:58:29.901541 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5b464cddd7-wff8q" podStartSLOduration=35.497879098 podStartE2EDuration="43.901491515s" podCreationTimestamp="2026-04-17 23:57:46 +0000 UTC" firstStartedPulling="2026-04-17 23:58:20.672067394 +0000 UTC m=+57.318315937" lastFinishedPulling="2026-04-17 23:58:29.075679811 +0000 UTC m=+65.721928354" observedRunningTime="2026-04-17 23:58:29.898589775 +0000 UTC m=+66.544838318" watchObservedRunningTime="2026-04-17 23:58:29.901491515 +0000 UTC m=+66.547740058" Apr 17 23:58:30.334929 containerd[1571]: time="2026-04-17T23:58:30.334036040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:30.336768 containerd[1571]: time="2026-04-17T23:58:30.336689416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 17 23:58:30.341972 containerd[1571]: time="2026-04-17T23:58:30.341790840Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:30.346711 containerd[1571]: time="2026-04-17T23:58:30.346613521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:30.348169 containerd[1571]: time="2026-04-17T23:58:30.348016452Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.270556501s" Apr 17 23:58:30.348169 containerd[1571]: time="2026-04-17T23:58:30.348157869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 17 23:58:30.352175 containerd[1571]: time="2026-04-17T23:58:30.351893322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 23:58:30.360309 containerd[1571]: time="2026-04-17T23:58:30.360261109Z" level=info msg="CreateContainer within sandbox \"e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 23:58:30.388608 containerd[1571]: time="2026-04-17T23:58:30.388524807Z" level=info msg="CreateContainer within sandbox \"e5c684aacd62dba46b591ec032eb7a7a2d141bd49d4cbae9abdb676c12c15bb6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1d871a84ea2f1856ffd9420587f366d18c918bf2de4414999b1d1f9571db6b95\"" Apr 17 23:58:30.389999 containerd[1571]: time="2026-04-17T23:58:30.389909988Z" level=info msg="StartContainer for \"1d871a84ea2f1856ffd9420587f366d18c918bf2de4414999b1d1f9571db6b95\"" 
Apr 17 23:58:30.501153 containerd[1571]: time="2026-04-17T23:58:30.501075818Z" level=info msg="StartContainer for \"1d871a84ea2f1856ffd9420587f366d18c918bf2de4414999b1d1f9571db6b95\" returns successfully" Apr 17 23:58:30.906113 kubelet[2741]: I0417 23:58:30.903738 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xt426" podStartSLOduration=34.853371833 podStartE2EDuration="44.903650204s" podCreationTimestamp="2026-04-17 23:57:46 +0000 UTC" firstStartedPulling="2026-04-17 23:58:20.300246641 +0000 UTC m=+56.946495184" lastFinishedPulling="2026-04-17 23:58:30.350525002 +0000 UTC m=+66.996773555" observedRunningTime="2026-04-17 23:58:30.902897491 +0000 UTC m=+67.549146044" watchObservedRunningTime="2026-04-17 23:58:30.903650204 +0000 UTC m=+67.549898757" Apr 17 23:58:31.402711 kubelet[2741]: I0417 23:58:31.402484 2741 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 23:58:31.404865 kubelet[2741]: I0417 23:58:31.404510 2741 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 23:58:31.514958 kubelet[2741]: E0417 23:58:31.513421 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:58:31.514427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4007408849.mount: Deactivated successfully. Apr 17 23:58:31.531322 containerd[1571]: time="2026-04-17T23:58:31.531276716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:31.532541 containerd[1571]: time="2026-04-17T23:58:31.532500058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 17 23:58:31.534147 containerd[1571]: time="2026-04-17T23:58:31.533775943Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:31.537538 containerd[1571]: time="2026-04-17T23:58:31.537484361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:31.538951 containerd[1571]: time="2026-04-17T23:58:31.538905372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.186891184s" Apr 17 23:58:31.539069 containerd[1571]: time="2026-04-17T23:58:31.538955384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 17 23:58:31.547061 containerd[1571]: time="2026-04-17T23:58:31.546987267Z" level=info msg="CreateContainer within sandbox \"f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88\" for 
container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:58:31.559965 containerd[1571]: time="2026-04-17T23:58:31.559926659Z" level=info msg="CreateContainer within sandbox \"f1d0a5276d36ead95adddbeba906bbfde58fd4eeb007f06c5f67110221cc9a88\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"72e5595f6cd14ef300f7ddbc508fcb2b74930ce99ccf55b3fc2c0762c070fbc2\"" Apr 17 23:58:31.564113 containerd[1571]: time="2026-04-17T23:58:31.563288902Z" level=info msg="StartContainer for \"72e5595f6cd14ef300f7ddbc508fcb2b74930ce99ccf55b3fc2c0762c070fbc2\"" Apr 17 23:58:31.568059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4196151125.mount: Deactivated successfully. Apr 17 23:58:31.693886 containerd[1571]: time="2026-04-17T23:58:31.692700627Z" level=info msg="StartContainer for \"72e5595f6cd14ef300f7ddbc508fcb2b74930ce99ccf55b3fc2c0762c070fbc2\" returns successfully" Apr 17 23:58:31.906153 kubelet[2741]: I0417 23:58:31.906019 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-59bdfdf7b8-ksvgv" podStartSLOduration=3.913189484 podStartE2EDuration="14.905995822s" podCreationTimestamp="2026-04-17 23:58:17 +0000 UTC" firstStartedPulling="2026-04-17 23:58:20.549363963 +0000 UTC m=+57.195612506" lastFinishedPulling="2026-04-17 23:58:31.542170291 +0000 UTC m=+68.188418844" observedRunningTime="2026-04-17 23:58:31.904423835 +0000 UTC m=+68.550672388" watchObservedRunningTime="2026-04-17 23:58:31.905995822 +0000 UTC m=+68.552244375" Apr 17 23:58:41.511576 kubelet[2741]: E0417 23:58:41.511076 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:58:46.381187 kubelet[2741]: I0417 23:58:46.379965 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:58:58.514191 kubelet[2741]: E0417 23:58:58.513897 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:58:59.511897 kubelet[2741]: E0417 23:58:59.510454 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:59:00.500432 kubelet[2741]: I0417 23:59:00.500289 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:59:23.601038 systemd[1]: run-containerd-runc-k8s.io-f1d30322c7141e2f702eb5906bc71ba6421e0e8b01e2a59a0a3047b1cfea406f-runc.RRQEhI.mount: Deactivated successfully. Apr 17 23:59:38.510840 kubelet[2741]: E0417 23:59:38.510589 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:59:39.511016 kubelet[2741]: E0417 23:59:39.510594 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:59:49.206530 systemd[1]: run-containerd-runc-k8s.io-4ed17383e70a2377d9272ed90da8e3ba4d4deebc768264e63b796827b28841b5-runc.7ikpX8.mount: Deactivated successfully. 
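Annotation: the recurring dns.go:153 "Nameserver limits exceeded" errors indicate the node's resolv.conf lists more nameservers than the resolver limit of three, so kubelet applies only the first three (the addresses shown in the message) when building pod DNS config. A hedged sketch of that truncation against a hypothetical resolv.conf; this is illustrative, not kubelet's implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the common resolver limit of three nameservers;
// entries beyond it are dropped, which is what the log event reports.
const maxNameservers = 3

// applyNameserverLimit extracts nameserver lines and keeps at most three.
func applyNameserverLimit(resolvConf string) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers] // surplus nameservers are omitted
	}
	return servers
}

func main() {
	// Hypothetical resolv.conf with one nameserver too many.
	conf := `nameserver 172.232.0.21
nameserver 172.232.0.13
nameserver 172.232.0.22
nameserver 192.0.2.53`
	fmt.Println(applyNameserverLimit(conf)) // [172.232.0.21 172.232.0.13 172.232.0.22]
}
```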
Apr 17 23:59:49.513699 kubelet[2741]: E0417 23:59:49.513059 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:59:54.510418 kubelet[2741]: E0417 23:59:54.510359 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 17 23:59:57.043698 systemd[1]: Started sshd@7-172.232.15.112:22-50.85.169.122:59270.service - OpenSSH per-connection server daemon (50.85.169.122:59270). Apr 17 23:59:57.666161 sshd[5990]: Accepted publickey for core from 50.85.169.122 port 59270 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 17 23:59:57.672509 sshd[5990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:59:57.691531 systemd-logind[1550]: New session 8 of user core. Apr 17 23:59:57.698149 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 17 23:59:57.918197 systemd[1]: run-containerd-runc-k8s.io-f1d30322c7141e2f702eb5906bc71ba6421e0e8b01e2a59a0a3047b1cfea406f-runc.jUckEa.mount: Deactivated successfully. Apr 17 23:59:58.326039 sshd[5990]: pam_unix(sshd:session): session closed for user core Apr 17 23:59:58.333874 systemd[1]: sshd@7-172.232.15.112:22-50.85.169.122:59270.service: Deactivated successfully. Apr 17 23:59:58.336829 systemd-logind[1550]: Session 8 logged out. Waiting for processes to exit. Apr 17 23:59:58.344730 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 23:59:58.350347 systemd-logind[1550]: Removed session 8. Apr 17 23:59:59.515164 kubelet[2741]: E0417 23:59:59.514432 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 18 00:00:00.512793 kubelet[2741]: E0418 00:00:00.511581 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 18 00:00:03.431637 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Apr 18 00:00:03.435554 systemd[1]: Started sshd@8-172.232.15.112:22-50.85.169.122:38668.service - OpenSSH per-connection server daemon (50.85.169.122:38668). Apr 18 00:00:03.450954 systemd[1]: logrotate.service: Deactivated successfully. Apr 18 00:00:04.052929 sshd[6081]: Accepted publickey for core from 50.85.169.122 port 38668 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 18 00:00:04.055429 sshd[6081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 18 00:00:04.079687 systemd-logind[1550]: New session 9 of user core. Apr 18 00:00:04.088626 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 18 00:00:04.571584 sshd[6081]: pam_unix(sshd:session): session closed for user core Apr 18 00:00:04.575741 systemd[1]: sshd@8-172.232.15.112:22-50.85.169.122:38668.service: Deactivated successfully. Apr 18 00:00:04.581359 systemd-logind[1550]: Session 9 logged out. Waiting for processes to exit. Apr 18 00:00:04.582030 systemd[1]: session-9.scope: Deactivated successfully. Apr 18 00:00:04.586507 systemd-logind[1550]: Removed session 9. 
Apr 18 00:00:09.676569 systemd[1]: Started sshd@9-172.232.15.112:22-50.85.169.122:53204.service - OpenSSH per-connection server daemon (50.85.169.122:53204). Apr 18 00:00:10.284255 sshd[6099]: Accepted publickey for core from 50.85.169.122 port 53204 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 18 00:00:10.286076 sshd[6099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 18 00:00:10.291175 systemd-logind[1550]: New session 10 of user core. Apr 18 00:00:10.294475 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 18 00:00:10.778298 sshd[6099]: pam_unix(sshd:session): session closed for user core Apr 18 00:00:10.782615 systemd-logind[1550]: Session 10 logged out. Waiting for processes to exit. Apr 18 00:00:10.783636 systemd[1]: sshd@9-172.232.15.112:22-50.85.169.122:53204.service: Deactivated successfully. Apr 18 00:00:10.789003 systemd[1]: session-10.scope: Deactivated successfully. Apr 18 00:00:10.790100 systemd-logind[1550]: Removed session 10. Apr 18 00:00:13.513666 kubelet[2741]: E0418 00:00:13.513189 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 18 00:00:15.886412 systemd[1]: Started sshd@10-172.232.15.112:22-50.85.169.122:53216.service - OpenSSH per-connection server daemon (50.85.169.122:53216). Apr 18 00:00:16.488888 sshd[6115]: Accepted publickey for core from 50.85.169.122 port 53216 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 18 00:00:16.491189 sshd[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 18 00:00:16.496520 systemd-logind[1550]: New session 11 of user core. Apr 18 00:00:16.500833 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 18 00:00:17.025387 sshd[6115]: pam_unix(sshd:session): session closed for user core Apr 18 00:00:17.034358 systemd-logind[1550]: Session 11 logged out. Waiting for processes to exit. Apr 18 00:00:17.035023 systemd[1]: sshd@10-172.232.15.112:22-50.85.169.122:53216.service: Deactivated successfully. Apr 18 00:00:17.042674 systemd[1]: session-11.scope: Deactivated successfully. Apr 18 00:00:17.044080 systemd-logind[1550]: Removed session 11. Apr 18 00:00:22.136961 systemd[1]: Started sshd@11-172.232.15.112:22-50.85.169.122:36308.service - OpenSSH per-connection server daemon (50.85.169.122:36308). Apr 18 00:00:22.768302 sshd[6171]: Accepted publickey for core from 50.85.169.122 port 36308 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 18 00:00:22.771622 sshd[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 18 00:00:22.786831 systemd-logind[1550]: New session 12 of user core. Apr 18 00:00:22.795268 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 18 00:00:23.418299 sshd[6171]: pam_unix(sshd:session): session closed for user core Apr 18 00:00:23.427976 systemd[1]: sshd@11-172.232.15.112:22-50.85.169.122:36308.service: Deactivated successfully. Apr 18 00:00:23.429265 systemd-logind[1550]: Session 12 logged out. Waiting for processes to exit. Apr 18 00:00:23.435737 systemd[1]: session-12.scope: Deactivated successfully. Apr 18 00:00:23.439534 systemd-logind[1550]: Removed session 12. Apr 18 00:00:23.554219 systemd[1]: Started sshd@12-172.232.15.112:22-50.85.169.122:36316.service - OpenSSH per-connection server daemon (50.85.169.122:36316). 
Apr 18 00:00:24.185869 sshd[6188]: Accepted publickey for core from 50.85.169.122 port 36316 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 18 00:00:24.189721 sshd[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 18 00:00:24.196375 systemd-logind[1550]: New session 13 of user core. Apr 18 00:00:24.200671 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 18 00:00:24.706924 sshd[6188]: pam_unix(sshd:session): session closed for user core Apr 18 00:00:24.711555 systemd-logind[1550]: Session 13 logged out. Waiting for processes to exit. Apr 18 00:00:24.712702 systemd[1]: sshd@12-172.232.15.112:22-50.85.169.122:36316.service: Deactivated successfully. Apr 18 00:00:24.717517 systemd[1]: session-13.scope: Deactivated successfully. Apr 18 00:00:24.719378 systemd-logind[1550]: Removed session 13. Apr 18 00:00:24.809299 systemd[1]: Started sshd@13-172.232.15.112:22-50.85.169.122:36324.service - OpenSSH per-connection server daemon (50.85.169.122:36324). Apr 18 00:00:25.423750 sshd[6217]: Accepted publickey for core from 50.85.169.122 port 36324 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 18 00:00:25.427929 sshd[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 18 00:00:25.436291 systemd-logind[1550]: New session 14 of user core. Apr 18 00:00:25.442688 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 18 00:00:25.928331 sshd[6217]: pam_unix(sshd:session): session closed for user core Apr 18 00:00:25.932858 systemd-logind[1550]: Session 14 logged out. Waiting for processes to exit. Apr 18 00:00:25.933905 systemd[1]: sshd@13-172.232.15.112:22-50.85.169.122:36324.service: Deactivated successfully. Apr 18 00:00:25.938585 systemd[1]: session-14.scope: Deactivated successfully. Apr 18 00:00:25.940298 systemd-logind[1550]: Removed session 14. Apr 18 00:00:29.909621 systemd[1]: run-containerd-runc-k8s.io-e02c516f9f3eceeeb394ef815f67badc47307c7db5497cd6495fe74a34153d62-runc.znGLjz.mount: Deactivated successfully. Apr 18 00:00:31.031398 systemd[1]: Started sshd@14-172.232.15.112:22-50.85.169.122:57622.service - OpenSSH per-connection server daemon (50.85.169.122:57622). Apr 18 00:00:31.638178 sshd[6273]: Accepted publickey for core from 50.85.169.122 port 57622 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 18 00:00:31.640534 sshd[6273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 18 00:00:31.647435 systemd-logind[1550]: New session 15 of user core. Apr 18 00:00:31.655612 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 18 00:00:32.172390 sshd[6273]: pam_unix(sshd:session): session closed for user core Apr 18 00:00:32.178654 systemd[1]: sshd@14-172.232.15.112:22-50.85.169.122:57622.service: Deactivated successfully. Apr 18 00:00:32.188650 systemd[1]: session-15.scope: Deactivated successfully. Apr 18 00:00:32.189745 systemd-logind[1550]: Session 15 logged out. Waiting for processes to exit. Apr 18 00:00:32.190972 systemd-logind[1550]: Removed session 15. Apr 18 00:00:37.279328 systemd[1]: Started sshd@15-172.232.15.112:22-50.85.169.122:57630.service - OpenSSH per-connection server daemon (50.85.169.122:57630). 
Apr 18 00:00:37.876463 sshd[6287]: Accepted publickey for core from 50.85.169.122 port 57630 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 18 00:00:37.878543 sshd[6287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 18 00:00:37.883473 systemd-logind[1550]: New session 16 of user core. Apr 18 00:00:37.890456 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 18 00:00:38.385043 sshd[6287]: pam_unix(sshd:session): session closed for user core Apr 18 00:00:38.390929 systemd[1]: sshd@15-172.232.15.112:22-50.85.169.122:57630.service: Deactivated successfully. Apr 18 00:00:38.391977 systemd-logind[1550]: Session 16 logged out. Waiting for processes to exit. Apr 18 00:00:38.396028 systemd[1]: session-16.scope: Deactivated successfully. Apr 18 00:00:38.397501 systemd-logind[1550]: Removed session 16. Apr 18 00:00:43.491712 systemd[1]: Started sshd@16-172.232.15.112:22-50.85.169.122:59862.service - OpenSSH per-connection server daemon (50.85.169.122:59862). Apr 18 00:00:44.108503 sshd[6302]: Accepted publickey for core from 50.85.169.122 port 59862 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 18 00:00:44.113442 sshd[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 18 00:00:44.120192 systemd-logind[1550]: New session 17 of user core. Apr 18 00:00:44.125437 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 18 00:00:44.612502 sshd[6302]: pam_unix(sshd:session): session closed for user core Apr 18 00:00:44.617783 systemd[1]: sshd@16-172.232.15.112:22-50.85.169.122:59862.service: Deactivated successfully. Apr 18 00:00:44.623765 systemd[1]: session-17.scope: Deactivated successfully. Apr 18 00:00:44.625663 systemd-logind[1550]: Session 17 logged out. Waiting for processes to exit. Apr 18 00:00:44.627585 systemd-logind[1550]: Removed session 17. Apr 18 00:00:44.715360 systemd[1]: Started sshd@17-172.232.15.112:22-50.85.169.122:59866.service - OpenSSH per-connection server daemon (50.85.169.122:59866). Apr 18 00:00:45.313322 sshd[6316]: Accepted publickey for core from 50.85.169.122 port 59866 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 18 00:00:45.315540 sshd[6316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 18 00:00:45.320824 systemd-logind[1550]: New session 18 of user core. Apr 18 00:00:45.327224 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 18 00:00:45.513155 kubelet[2741]: E0418 00:00:45.512997 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 18 00:00:46.002684 sshd[6316]: pam_unix(sshd:session): session closed for user core Apr 18 00:00:46.008556 systemd-logind[1550]: Session 18 logged out. Waiting for processes to exit. Apr 18 00:00:46.009556 systemd[1]: sshd@17-172.232.15.112:22-50.85.169.122:59866.service: Deactivated successfully. Apr 18 00:00:46.014718 systemd[1]: session-18.scope: Deactivated successfully. Apr 18 00:00:46.015844 systemd-logind[1550]: Removed session 18. Apr 18 00:00:46.104605 systemd[1]: Started sshd@18-172.232.15.112:22-50.85.169.122:59876.service - OpenSSH per-connection server daemon (50.85.169.122:59876). 
Apr 18 00:00:46.720156 sshd[6328]: Accepted publickey for core from 50.85.169.122 port 59876 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 18 00:00:46.721551 sshd[6328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 18 00:00:46.728087 systemd-logind[1550]: New session 19 of user core. Apr 18 00:00:46.731527 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 18 00:00:47.512288 kubelet[2741]: E0418 00:00:47.512228 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 18 00:00:47.912733 sshd[6328]: pam_unix(sshd:session): session closed for user core Apr 18 00:00:47.917849 systemd-logind[1550]: Session 19 logged out. Waiting for processes to exit. Apr 18 00:00:47.922049 systemd[1]: sshd@18-172.232.15.112:22-50.85.169.122:59876.service: Deactivated successfully. Apr 18 00:00:47.933058 systemd[1]: session-19.scope: Deactivated successfully. Apr 18 00:00:47.940553 systemd-logind[1550]: Removed session 19. Apr 18 00:00:48.025630 systemd[1]: Started sshd@19-172.232.15.112:22-50.85.169.122:59892.service - OpenSSH per-connection server daemon (50.85.169.122:59892). Apr 18 00:00:48.653752 sshd[6365]: Accepted publickey for core from 50.85.169.122 port 59892 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 18 00:00:48.656435 sshd[6365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 18 00:00:48.662669 systemd-logind[1550]: New session 20 of user core. Apr 18 00:00:48.668479 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 18 00:00:49.246973 systemd[1]: run-containerd-runc-k8s.io-4ed17383e70a2377d9272ed90da8e3ba4d4deebc768264e63b796827b28841b5-runc.93VsBx.mount: Deactivated successfully. Apr 18 00:00:49.322406 sshd[6365]: pam_unix(sshd:session): session closed for user core Apr 18 00:00:49.329318 systemd[1]: sshd@19-172.232.15.112:22-50.85.169.122:59892.service: Deactivated successfully. Apr 18 00:00:49.344152 systemd[1]: session-20.scope: Deactivated successfully. Apr 18 00:00:49.344505 systemd-logind[1550]: Session 20 logged out. Waiting for processes to exit. Apr 18 00:00:49.356668 systemd-logind[1550]: Removed session 20. Apr 18 00:00:49.423541 systemd[1]: Started sshd@20-172.232.15.112:22-50.85.169.122:59906.service - OpenSSH per-connection server daemon (50.85.169.122:59906). Apr 18 00:00:50.021162 sshd[6407]: Accepted publickey for core from 50.85.169.122 port 59906 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 18 00:00:50.022952 sshd[6407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 18 00:00:50.030773 systemd-logind[1550]: New session 21 of user core. Apr 18 00:00:50.037638 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 18 00:00:50.531636 sshd[6407]: pam_unix(sshd:session): session closed for user core Apr 18 00:00:50.535653 systemd[1]: sshd@20-172.232.15.112:22-50.85.169.122:59906.service: Deactivated successfully. Apr 18 00:00:50.542487 systemd[1]: session-21.scope: Deactivated successfully. Apr 18 00:00:50.543453 systemd-logind[1550]: Session 21 logged out. Waiting for processes to exit. Apr 18 00:00:50.544706 systemd-logind[1550]: Removed session 21. Apr 18 00:00:55.635386 systemd[1]: Started sshd@21-172.232.15.112:22-50.85.169.122:49108.service - OpenSSH per-connection server daemon (50.85.169.122:49108). 
Apr 18 00:00:56.245273 sshd[6422]: Accepted publickey for core from 50.85.169.122 port 49108 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 18 00:00:56.247145 sshd[6422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 18 00:00:56.253822 systemd-logind[1550]: New session 22 of user core. Apr 18 00:00:56.259428 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 18 00:00:56.741809 sshd[6422]: pam_unix(sshd:session): session closed for user core Apr 18 00:00:56.746405 systemd[1]: sshd@21-172.232.15.112:22-50.85.169.122:49108.service: Deactivated successfully. Apr 18 00:00:56.750406 systemd-logind[1550]: Session 22 logged out. Waiting for processes to exit. Apr 18 00:00:56.750711 systemd[1]: session-22.scope: Deactivated successfully. Apr 18 00:00:56.753351 systemd-logind[1550]: Removed session 22. Apr 18 00:01:01.857432 systemd[1]: Started sshd@22-172.232.15.112:22-50.85.169.122:48846.service - OpenSSH per-connection server daemon (50.85.169.122:48846). Apr 18 00:01:02.466214 sshd[6475]: Accepted publickey for core from 50.85.169.122 port 48846 ssh2: RSA SHA256:ZW8qVYkBY2hwcd9eo7CU3q4bjdO/ekmmqKOoI3qL08U Apr 18 00:01:02.468499 sshd[6475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 18 00:01:02.475054 systemd-logind[1550]: New session 23 of user core. Apr 18 00:01:02.480436 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 18 00:01:02.995209 sshd[6475]: pam_unix(sshd:session): session closed for user core Apr 18 00:01:02.999166 systemd[1]: sshd@22-172.232.15.112:22-50.85.169.122:48846.service: Deactivated successfully. Apr 18 00:01:03.005791 systemd[1]: session-23.scope: Deactivated successfully. Apr 18 00:01:03.006844 systemd-logind[1550]: Session 23 logged out. Waiting for processes to exit. Apr 18 00:01:03.007831 systemd-logind[1550]: Removed session 23.