Apr 24 23:34:08.031955 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 24 22:11:38 -00 2026
Apr 24 23:34:08.031977 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:34:08.031986 kernel: BIOS-provided physical RAM map:
Apr 24 23:34:08.031992 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Apr 24 23:34:08.031997 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Apr 24 23:34:08.032013 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 24 23:34:08.032020 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Apr 24 23:34:08.032026 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Apr 24 23:34:08.032031 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 24 23:34:08.032037 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 24 23:34:08.032043 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 24 23:34:08.032049 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 24 23:34:08.032055 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Apr 24 23:34:08.032064 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 24 23:34:08.032071 kernel: NX (Execute Disable) protection: active
Apr 24 23:34:08.032078 kernel: APIC: Static calls initialized
Apr 24 23:34:08.032084 kernel: SMBIOS 2.8 present.
Apr 24 23:34:08.032090 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Apr 24 23:34:08.032097 kernel: Hypervisor detected: KVM
Apr 24 23:34:08.032105 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 24 23:34:08.032112 kernel: kvm-clock: using sched offset of 5972091770 cycles
Apr 24 23:34:08.032119 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 24 23:34:08.032125 kernel: tsc: Detected 2000.000 MHz processor
Apr 24 23:34:08.032132 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 24 23:34:08.032139 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 24 23:34:08.032145 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Apr 24 23:34:08.032152 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 24 23:34:08.032158 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 24 23:34:08.032167 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Apr 24 23:34:08.032173 kernel: Using GB pages for direct mapping
Apr 24 23:34:08.032180 kernel: ACPI: Early table checksum verification disabled
Apr 24 23:34:08.032186 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Apr 24 23:34:08.032193 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:34:08.032199 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:34:08.032205 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:34:08.032212 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 24 23:34:08.032218 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:34:08.032227 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:34:08.032234 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:34:08.032246 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:34:08.032260 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Apr 24 23:34:08.032269 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Apr 24 23:34:08.032276 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 24 23:34:08.032285 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Apr 24 23:34:08.032292 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Apr 24 23:34:08.032299 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Apr 24 23:34:08.032305 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Apr 24 23:34:08.032312 kernel: No NUMA configuration found
Apr 24 23:34:08.032318 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Apr 24 23:34:08.032325 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff]
Apr 24 23:34:08.032331 kernel: Zone ranges:
Apr 24 23:34:08.032340 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 24 23:34:08.032347 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 24 23:34:08.032354 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Apr 24 23:34:08.032360 kernel: Movable zone start for each node
Apr 24 23:34:08.032367 kernel: Early memory node ranges
Apr 24 23:34:08.032373 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 24 23:34:08.032380 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Apr 24 23:34:08.032386 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Apr 24 23:34:08.032396 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Apr 24 23:34:08.032411 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 24 23:34:08.032419 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 24 23:34:08.032426 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Apr 24 23:34:08.032434 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 24 23:34:08.032447 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 24 23:34:08.032454 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 24 23:34:08.032461 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 24 23:34:08.032467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 24 23:34:08.032475 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 24 23:34:08.032490 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 24 23:34:08.032501 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 24 23:34:08.032511 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 24 23:34:08.032522 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 24 23:34:08.032529 kernel: TSC deadline timer available
Apr 24 23:34:08.032535 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 24 23:34:08.032548 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 24 23:34:08.032555 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 24 23:34:08.032561 kernel: kvm-guest: setup PV sched yield
Apr 24 23:34:08.032571 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 24 23:34:08.032577 kernel: Booting paravirtualized kernel on KVM
Apr 24 23:34:08.032584 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 24 23:34:08.032591 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 24 23:34:08.032610 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 24 23:34:08.032750 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 24 23:34:08.032761 kernel: pcpu-alloc: [0] 0 1
Apr 24 23:34:08.032768 kernel: kvm-guest: PV spinlocks enabled
Apr 24 23:34:08.032775 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 24 23:34:08.032787 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:34:08.032794 kernel: random: crng init done
Apr 24 23:34:08.032800 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 24 23:34:08.032807 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 24 23:34:08.032813 kernel: Fallback order for Node 0: 0
Apr 24 23:34:08.032825 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Apr 24 23:34:08.032832 kernel: Policy zone: Normal
Apr 24 23:34:08.032839 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 24 23:34:08.032845 kernel: software IO TLB: area num 2.
Apr 24 23:34:08.032855 kernel: Memory: 3966212K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 227300K reserved, 0K cma-reserved)
Apr 24 23:34:08.032861 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 24 23:34:08.032868 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 24 23:34:08.032875 kernel: ftrace: allocated 149 pages with 4 groups
Apr 24 23:34:08.032881 kernel: Dynamic Preempt: voluntary
Apr 24 23:34:08.032888 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 24 23:34:08.032895 kernel: rcu: RCU event tracing is enabled.
Apr 24 23:34:08.032902 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 24 23:34:08.032909 kernel: Trampoline variant of Tasks RCU enabled.
Apr 24 23:34:08.032918 kernel: Rude variant of Tasks RCU enabled.
Apr 24 23:34:08.032925 kernel: Tracing variant of Tasks RCU enabled.
Apr 24 23:34:08.032932 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 24 23:34:08.032938 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 24 23:34:08.032945 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 24 23:34:08.032952 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 24 23:34:08.032958 kernel: Console: colour VGA+ 80x25
Apr 24 23:34:08.032965 kernel: printk: console [tty0] enabled
Apr 24 23:34:08.032971 kernel: printk: console [ttyS0] enabled
Apr 24 23:34:08.032980 kernel: ACPI: Core revision 20230628
Apr 24 23:34:08.032991 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 24 23:34:08.033007 kernel: APIC: Switch to symmetric I/O mode setup
Apr 24 23:34:08.033018 kernel: x2apic enabled
Apr 24 23:34:08.033033 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 24 23:34:08.033043 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 24 23:34:08.033050 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 24 23:34:08.033057 kernel: kvm-guest: setup PV IPIs
Apr 24 23:34:08.033064 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 24 23:34:08.033070 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 24 23:34:08.033077 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Apr 24 23:34:08.033084 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 24 23:34:08.033094 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 24 23:34:08.033101 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 24 23:34:08.033108 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 24 23:34:08.033115 kernel: Spectre V2 : Mitigation: Retpolines
Apr 24 23:34:08.033125 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 24 23:34:08.033132 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 24 23:34:08.033143 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 24 23:34:08.033156 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 24 23:34:08.033167 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 24 23:34:08.033180 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 24 23:34:08.033191 kernel: active return thunk: srso_alias_return_thunk
Apr 24 23:34:08.033201 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 24 23:34:08.033208 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 24 23:34:08.033225 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 24 23:34:08.033232 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 24 23:34:08.033239 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 24 23:34:08.033246 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 24 23:34:08.033253 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 24 23:34:08.033264 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 24 23:34:08.033276 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Apr 24 23:34:08.033288 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Apr 24 23:34:08.033298 kernel: Freeing SMP alternatives memory: 32K
Apr 24 23:34:08.033305 kernel: pid_max: default: 32768 minimum: 301
Apr 24 23:34:08.033311 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 24 23:34:08.033318 kernel: landlock: Up and running.
Apr 24 23:34:08.033325 kernel: SELinux: Initializing.
Apr 24 23:34:08.033331 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:34:08.033338 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:34:08.033345 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Apr 24 23:34:08.033352 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:34:08.033361 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
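The `x86/fpu` offsets and sizes explain the reported 840-byte figure: in the compacted XSAVE layout, the total context size is the offset of the highest component plus its size, and here feature 9 (PKU) sits at offset 832 with 8 bytes, so 832 + 8 = 840. (The first 576 bytes are the 512-byte legacy x87/SSE area plus the 64-byte XSAVE header — a detail of the XSAVE format, not stated in the log.) The arithmetic as a tiny sketch, with `context_size` being a name invented here:

```python
# Offsets and sizes exactly as reported by the x86/fpu lines above
# (compacted XSAVE format; feature bit -> (offset, size)).
xstate = {
    2: (576, 256),  # AVX registers
    9: (832, 8),    # Protection Keys User registers
}

def context_size(xstate):
    """Compacted context size = offset + size of the highest-offset component."""
    offset, size = max(xstate.values(), key=lambda t: t[0])
    return offset + size
```

`context_size(xstate)` returns 840, matching "context size is 840 bytes" in the log.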
Apr 24 23:34:08.033368 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:34:08.033375 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 24 23:34:08.033381 kernel: ... version: 0
Apr 24 23:34:08.033392 kernel: ... bit width: 48
Apr 24 23:34:08.033398 kernel: ... generic registers: 6
Apr 24 23:34:08.033405 kernel: ... value mask: 0000ffffffffffff
Apr 24 23:34:08.033431 kernel: ... max period: 00007fffffffffff
Apr 24 23:34:08.033438 kernel: ... fixed-purpose events: 0
Apr 24 23:34:08.033448 kernel: ... event mask: 000000000000003f
Apr 24 23:34:08.033455 kernel: signal: max sigframe size: 3376
Apr 24 23:34:08.033461 kernel: rcu: Hierarchical SRCU implementation.
Apr 24 23:34:08.033468 kernel: rcu: Max phase no-delay instances is 400.
Apr 24 23:34:08.033475 kernel: smp: Bringing up secondary CPUs ...
Apr 24 23:34:08.033482 kernel: smpboot: x86: Booting SMP configuration:
Apr 24 23:34:08.033488 kernel: .... node #0, CPUs: #1
Apr 24 23:34:08.033495 kernel: smp: Brought up 1 node, 2 CPUs
Apr 24 23:34:08.033502 kernel: smpboot: Max logical packages: 1
Apr 24 23:34:08.033508 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Apr 24 23:34:08.033517 kernel: devtmpfs: initialized
Apr 24 23:34:08.033524 kernel: x86/mm: Memory block size: 128MB
Apr 24 23:34:08.033543 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 24 23:34:08.033571 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 24 23:34:08.033584 kernel: pinctrl core: initialized pinctrl subsystem
Apr 24 23:34:08.033596 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 24 23:34:08.033605 kernel: audit: initializing netlink subsys (disabled)
Apr 24 23:34:08.033617 kernel: audit: type=2000 audit(1777073646.228:1): state=initialized audit_enabled=0 res=1
Apr 24 23:34:08.033633 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 24 23:34:08.033650 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 24 23:34:08.033662 kernel: cpuidle: using governor menu
Apr 24 23:34:08.033687 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 23:34:08.033694 kernel: dca service started, version 1.12.1
Apr 24 23:34:08.033701 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 24 23:34:08.033708 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 24 23:34:08.033715 kernel: PCI: Using configuration type 1 for base access
Apr 24 23:34:08.033722 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 24 23:34:08.033730 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 23:34:08.033742 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 23:34:08.033749 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 23:34:08.033756 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 23:34:08.033762 kernel: ACPI: Added _OSI(Module Device)
Apr 24 23:34:08.033769 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 23:34:08.033776 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 23:34:08.033783 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 23:34:08.033789 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 24 23:34:08.033796 kernel: ACPI: Interpreter enabled
Apr 24 23:34:08.033805 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 24 23:34:08.033812 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 24 23:34:08.033819 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 24 23:34:08.033825 kernel: PCI: Using E820 reservations for host bridge windows
Apr 24 23:34:08.033832 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 24 23:34:08.033839 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 24 23:34:08.034028 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 24 23:34:08.034167 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 24 23:34:08.034308 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 24 23:34:08.034325 kernel: PCI host bridge to bus 0000:00
Apr 24 23:34:08.034484 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 24 23:34:08.034661 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 24 23:34:08.034830 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 24 23:34:08.035006 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 24 23:34:08.035155 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 24 23:34:08.035276 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Apr 24 23:34:08.035392 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 24 23:34:08.035627 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 24 23:34:08.035824 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 24 23:34:08.035999 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 24 23:34:08.036149 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 24 23:34:08.036382 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 24 23:34:08.036552 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 24 23:34:08.038775 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Apr 24 23:34:08.038920 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Apr 24 23:34:08.039048 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 24 23:34:08.039186 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 24 23:34:08.039324 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 24 23:34:08.039463 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Apr 24 23:34:08.039645 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 24 23:34:08.040847 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 24 23:34:08.040981 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 24 23:34:08.041121 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 24 23:34:08.041285 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 24 23:34:08.041435 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 24 23:34:08.041603 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Apr 24 23:34:08.041780 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Apr 24 23:34:08.042777 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 24 23:34:08.042917 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 24 23:34:08.042928 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 24 23:34:08.042935 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 24 23:34:08.042943 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 24 23:34:08.042954 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 24 23:34:08.042961 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 24 23:34:08.042968 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 24 23:34:08.042975 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 24 23:34:08.042981 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 24 23:34:08.042988 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 24 23:34:08.042995 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 24 23:34:08.043002 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 24 23:34:08.043011 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 24 23:34:08.043018 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 24 23:34:08.043025 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 24 23:34:08.043032 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 24 23:34:08.043039 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 24 23:34:08.043046 kernel: iommu: Default domain type: Translated
Apr 24 23:34:08.043053 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 24 23:34:08.043060 kernel: PCI: Using ACPI for IRQ routing
Apr 24 23:34:08.043071 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 24 23:34:08.043083 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Apr 24 23:34:08.043099 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Apr 24 23:34:08.043245 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 24 23:34:08.043373 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 24 23:34:08.043508 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 24 23:34:08.043519 kernel: vgaarb: loaded
Apr 24 23:34:08.043526 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 24 23:34:08.043533 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 24 23:34:08.043540 kernel: clocksource: Switched to clocksource kvm-clock
Apr 24 23:34:08.043551 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 23:34:08.044785 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 23:34:08.044800 kernel: pnp: PnP ACPI init
Apr 24 23:34:08.044998 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 24 23:34:08.045014 kernel: pnp: PnP ACPI: found 5 devices
Apr 24 23:34:08.045022 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 24 23:34:08.045029 kernel: NET: Registered PF_INET protocol family
Apr 24 23:34:08.045037 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 23:34:08.045050 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 24 23:34:08.045057 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 23:34:08.045064 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 24 23:34:08.045070 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 24 23:34:08.045078 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 24 23:34:08.045085 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 23:34:08.045092 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 23:34:08.045099 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 23:34:08.045105 kernel: NET: Registered PF_XDP protocol family
Apr 24 23:34:08.045248 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 24 23:34:08.045377 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 24 23:34:08.045756 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 24 23:34:08.045890 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Apr 24 23:34:08.046032 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 24 23:34:08.046169 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Apr 24 23:34:08.046180 kernel: PCI: CLS 0 bytes, default 64
Apr 24 23:34:08.046188 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 24 23:34:08.046201 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Apr 24 23:34:08.046208 kernel: Initialise system trusted keyrings
Apr 24 23:34:08.046215 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 24 23:34:08.046222 kernel: Key type asymmetric registered
Apr 24 23:34:08.046229 kernel: Asymmetric key parser 'x509' registered
Apr 24 23:34:08.046236 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 24 23:34:08.046243 kernel: io scheduler mq-deadline registered
Apr 24 23:34:08.046251 kernel: io scheduler kyber registered
Apr 24 23:34:08.046260 kernel: io scheduler bfq registered
Apr 24 23:34:08.046277 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 24 23:34:08.046290 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 24 23:34:08.046303 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 24 23:34:08.046315 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 23:34:08.046327 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 24 23:34:08.046336 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 24 23:34:08.046343 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 24 23:34:08.046350 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 24 23:34:08.046357 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 24 23:34:08.046505 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 24 23:34:08.048846 kernel: rtc_cmos 00:03: registered as rtc0
Apr 24 23:34:08.048997 kernel: rtc_cmos 00:03: setting system clock to 2026-04-24T23:34:07 UTC (1777073647)
Apr 24 23:34:08.049121 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 24 23:34:08.049131 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 24 23:34:08.049139 kernel: NET: Registered PF_INET6 protocol family
Apr 24 23:34:08.049146 kernel: Segment Routing with IPv6
Apr 24 23:34:08.049154 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 23:34:08.049166 kernel: NET: Registered PF_PACKET protocol family
Apr 24 23:34:08.049173 kernel: Key type dns_resolver registered
Apr 24 23:34:08.049179 kernel: IPI shorthand broadcast: enabled
Apr 24 23:34:08.049187 kernel: sched_clock: Marking stable (928003210, 343000830)->(1405406250, -134402210)
Apr 24 23:34:08.049194 kernel: registered taskstats version 1
Apr 24 23:34:08.049201 kernel: Loading compiled-in X.509 certificates
Apr 24 23:34:08.049209 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 507f116e6718ec7535b55c873de10edf9b6fe124'
Apr 24 23:34:08.049216 kernel: Key type .fscrypt registered
Apr 24 23:34:08.049223 kernel: Key type fscrypt-provisioning registered
Apr 24 23:34:08.049233 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 23:34:08.049240 kernel: ima: Allocated hash algorithm: sha1
Apr 24 23:34:08.049247 kernel: ima: No architecture policies found
Apr 24 23:34:08.049255 kernel: clk: Disabling unused clocks
Apr 24 23:34:08.049262 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 24 23:34:08.049269 kernel: Write protecting the kernel read-only data: 36864k
Apr 24 23:34:08.049276 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 24 23:34:08.049283 kernel: Run /init as init process
Apr 24 23:34:08.049291 kernel: with arguments:
Apr 24 23:34:08.049301 kernel: /init
Apr 24 23:34:08.049308 kernel: with environment:
Apr 24 23:34:08.049315 kernel: HOME=/
Apr 24 23:34:08.049322 kernel: TERM=linux
Apr 24 23:34:08.049331 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:34:08.049340 systemd[1]: Detected virtualization kvm.
Apr 24 23:34:08.049348 systemd[1]: Detected architecture x86-64.
Apr 24 23:34:08.049358 systemd[1]: Running in initrd.
Apr 24 23:34:08.049365 systemd[1]: No hostname configured, using default hostname.
Apr 24 23:34:08.049373 systemd[1]: Hostname set to .
Apr 24 23:34:08.049380 systemd[1]: Initializing machine ID from random generator.
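The `rtc_cmos` entry earlier sets the system clock to `2026-04-24T23:34:07 UTC (1777073647)`; the value in parentheses is the Unix epoch time. The conversion can be cross-checked with a couple of lines of Python (a sketch; `epoch_to_iso` is a name invented here):

```python
from datetime import datetime, timezone

def epoch_to_iso(epoch):
    """Render a Unix timestamp the way rtc_cmos logs it (UTC, second resolution)."""
    return datetime.fromtimestamp(epoch, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%S")
```

`epoch_to_iso(1777073647)` yields `2026-04-24T23:34:07`, which also agrees with the `audit(1777073646.228:1)` timestamp logged about a second earlier.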
Apr 24 23:34:08.049388 systemd[1]: Queued start job for default target initrd.target.
Apr 24 23:34:08.049396 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:34:08.049416 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:34:08.049576 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 23:34:08.049636 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:34:08.049646 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 23:34:08.049654 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 23:34:08.049663 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 23:34:08.049686 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 23:34:08.049698 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:34:08.049710 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:34:08.049718 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:34:08.049725 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:34:08.049733 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:34:08.049741 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:34:08.049749 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:34:08.049757 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:34:08.049765 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:34:08.049776 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:34:08.049784 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:34:08.049792 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:34:08.049799 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:34:08.049807 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:34:08.049838 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 23:34:08.049846 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:34:08.049854 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 23:34:08.049865 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 23:34:08.049874 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:34:08.049881 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:34:08.049889 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:34:08.049919 systemd-journald[178]: Collecting audit messages is disabled.
Apr 24 23:34:08.049939 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 23:34:08.049950 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:34:08.049958 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 23:34:08.049969 systemd-journald[178]: Journal started
Apr 24 23:34:08.049986 systemd-journald[178]: Runtime Journal (/run/log/journal/9ce3a3ae99f84caf9275b7850d74f96f) is 8.0M, max 78.3M, 70.3M free.
Apr 24 23:34:08.028182 systemd-modules-load[179]: Inserted module 'overlay'
Apr 24 23:34:08.060350 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:34:08.067702 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 23:34:08.068930 systemd-modules-load[179]: Inserted module 'br_netfilter'
Apr 24 23:34:08.152707 kernel: Bridge firewalling registered
Apr 24 23:34:08.153994 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:34:08.155206 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:34:08.163862 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:34:08.166470 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:34:08.169814 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:34:08.174829 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:34:08.186585 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:34:08.211071 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:34:08.213184 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:34:08.221876 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 23:34:08.226940 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:34:08.229354 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:34:08.233749 dracut-cmdline[207]: dracut-dracut-053
Apr 24 23:34:08.239031 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:34:08.242341 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:34:08.253969 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:34:08.282434 systemd-resolved[219]: Positive Trust Anchors:
Apr 24 23:34:08.283521 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:34:08.283569 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:34:08.291012 systemd-resolved[219]: Defaulting to hostname 'linux'.
Apr 24 23:34:08.292935 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:34:08.295857 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:34:08.346860 kernel: SCSI subsystem initialized
Apr 24 23:34:08.357701 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 23:34:08.370794 kernel: iscsi: registered transport (tcp)
Apr 24 23:34:08.393792 kernel: iscsi: registered transport (qla4xxx)
Apr 24 23:34:08.393873 kernel: QLogic iSCSI HBA Driver
Apr 24 23:34:08.453206 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:34:08.458818 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 23:34:08.493750 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 23:34:08.493786 kernel: device-mapper: uevent: version 1.0.3
Apr 24 23:34:08.496137 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 24 23:34:08.541704 kernel: raid6: avx2x4 gen() 33845 MB/s
Apr 24 23:34:08.559704 kernel: raid6: avx2x2 gen() 30588 MB/s
Apr 24 23:34:08.577984 kernel: raid6: avx2x1 gen() 23450 MB/s
Apr 24 23:34:08.578013 kernel: raid6: using algorithm avx2x4 gen() 33845 MB/s
Apr 24 23:34:08.602437 kernel: raid6: .... xor() 4853 MB/s, rmw enabled
Apr 24 23:34:08.602478 kernel: raid6: using avx2x2 recovery algorithm
Apr 24 23:34:08.624704 kernel: xor: automatically using best checksumming function avx
Apr 24 23:34:08.764706 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 23:34:08.778744 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:34:08.783900 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:34:08.804423 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Apr 24 23:34:08.809909 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:34:08.815803 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 23:34:08.840391 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation
Apr 24 23:34:08.876129 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:34:08.882809 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:34:08.959630 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:34:08.972775 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 24 23:34:08.987316 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:34:08.990447 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:34:08.992249 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:34:08.993874 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:34:09.001145 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 23:34:09.025018 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:34:09.289706 kernel: cryptd: max_cpu_qlen set to 1000
Apr 24 23:34:09.295815 kernel: libata version 3.00 loaded.
Apr 24 23:34:09.300696 kernel: scsi host0: Virtio SCSI HBA
Apr 24 23:34:09.315578 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 24 23:34:09.315643 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 24 23:34:09.315734 kernel: AES CTR mode by8 optimization enabled
Apr 24 23:34:09.316480 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:34:09.316900 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:34:09.350862 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:34:09.352042 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:34:09.352192 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:34:09.353086 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:34:09.369179 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:34:09.380665 kernel: ahci 0000:00:1f.2: version 3.0
Apr 24 23:34:09.380970 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 24 23:34:09.388128 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 24 23:34:09.388333 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 24 23:34:09.395774 kernel: scsi host1: ahci
Apr 24 23:34:09.402137 kernel: scsi host2: ahci
Apr 24 23:34:09.406732 kernel: scsi host3: ahci
Apr 24 23:34:09.408698 kernel: scsi host4: ahci
Apr 24 23:34:09.410699 kernel: scsi host5: ahci
Apr 24 23:34:09.411078 kernel: scsi host6: ahci
Apr 24 23:34:09.411238 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Apr 24 23:34:09.411253 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Apr 24 23:34:09.411269 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Apr 24 23:34:09.411284 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Apr 24 23:34:09.411294 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Apr 24 23:34:09.411311 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Apr 24 23:34:09.520018 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:34:09.526054 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:34:09.551907 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:34:09.728706 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 24 23:34:09.728785 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 24 23:34:09.728802 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 24 23:34:09.729713 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 24 23:34:09.732754 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 24 23:34:09.737833 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 24 23:34:09.758439 kernel: sd 0:0:0:0: Power-on or device reset occurred
Apr 24 23:34:09.758947 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Apr 24 23:34:09.787002 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 24 23:34:09.789948 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Apr 24 23:34:09.790145 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 24 23:34:09.800631 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 24 23:34:09.800708 kernel: GPT:9289727 != 167739391
Apr 24 23:34:09.800728 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 24 23:34:09.804632 kernel: GPT:9289727 != 167739391
Apr 24 23:34:09.804685 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 24 23:34:09.808758 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:34:09.811156 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 24 23:34:09.848720 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (442)
Apr 24 23:34:09.848972 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 24 23:34:09.860193 kernel: BTRFS: device fsid 077bb4ac-fe88-409a-8f61-fdf28cadf681 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (464)
Apr 24 23:34:09.866126 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 24 23:34:09.872577 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 24 23:34:09.878540 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 24 23:34:09.879648 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 24 23:34:09.892807 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 23:34:09.897953 disk-uuid[570]: Primary Header is updated.
Apr 24 23:34:09.897953 disk-uuid[570]: Secondary Entries is updated.
Apr 24 23:34:09.897953 disk-uuid[570]: Secondary Header is updated.
Apr 24 23:34:09.903690 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:34:09.910722 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:34:10.916142 disk-uuid[571]: The operation has completed successfully.
Apr 24 23:34:10.917362 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:34:10.977455 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 24 23:34:10.977833 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 24 23:34:10.989920 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 24 23:34:10.995750 sh[585]: Success
Apr 24 23:34:11.011692 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 24 23:34:11.064642 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 24 23:34:11.067719 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 24 23:34:11.070361 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 24 23:34:11.095317 kernel: BTRFS info (device dm-0): first mount of filesystem 077bb4ac-fe88-409a-8f61-fdf28cadf681
Apr 24 23:34:11.095348 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:34:11.101361 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 24 23:34:11.101390 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 24 23:34:11.106419 kernel: BTRFS info (device dm-0): using free space tree
Apr 24 23:34:11.115698 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 24 23:34:11.117209 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 24 23:34:11.118414 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 24 23:34:11.125789 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 24 23:34:11.129005 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 24 23:34:11.147144 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:34:11.147174 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:34:11.147194 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:34:11.158786 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 23:34:11.158810 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:34:11.172116 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 24 23:34:11.175731 kernel: BTRFS info (device sda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:34:11.183381 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 24 23:34:11.193419 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 24 23:34:11.273611 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:34:11.282945 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:34:11.283041 ignition[683]: Ignition 2.19.0
Apr 24 23:34:11.285756 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:34:11.283048 ignition[683]: Stage: fetch-offline
Apr 24 23:34:11.283085 ignition[683]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:34:11.283095 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 23:34:11.283179 ignition[683]: parsed url from cmdline: ""
Apr 24 23:34:11.283184 ignition[683]: no config URL provided
Apr 24 23:34:11.283190 ignition[683]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:34:11.283199 ignition[683]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:34:11.283205 ignition[683]: failed to fetch config: resource requires networking
Apr 24 23:34:11.283655 ignition[683]: Ignition finished successfully
Apr 24 23:34:11.308469 systemd-networkd[770]: lo: Link UP
Apr 24 23:34:11.308482 systemd-networkd[770]: lo: Gained carrier
Apr 24 23:34:11.310190 systemd-networkd[770]: Enumeration completed
Apr 24 23:34:11.310273 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:34:11.310818 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:34:11.310822 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:34:11.312389 systemd-networkd[770]: eth0: Link UP
Apr 24 23:34:11.312393 systemd-networkd[770]: eth0: Gained carrier
Apr 24 23:34:11.312401 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:34:11.313183 systemd[1]: Reached target network.target - Network.
Apr 24 23:34:11.320837 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 24 23:34:11.333788 ignition[773]: Ignition 2.19.0
Apr 24 23:34:11.333801 ignition[773]: Stage: fetch
Apr 24 23:34:11.333954 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:34:11.333965 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 23:34:11.334040 ignition[773]: parsed url from cmdline: ""
Apr 24 23:34:11.334044 ignition[773]: no config URL provided
Apr 24 23:34:11.334050 ignition[773]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:34:11.334059 ignition[773]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:34:11.334075 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #1
Apr 24 23:34:11.334219 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 24 23:34:11.534372 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #2
Apr 24 23:34:11.534546 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 24 23:34:11.934904 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #3
Apr 24 23:34:11.935056 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 24 23:34:12.030749 systemd-networkd[770]: eth0: DHCPv4 address 172.238.161.65/24, gateway 172.238.161.1 acquired from 23.213.15.243
Apr 24 23:34:12.443941 systemd-networkd[770]: eth0: Gained IPv6LL
Apr 24 23:34:12.736091 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #4
Apr 24 23:34:12.834179 ignition[773]: PUT result: OK
Apr 24 23:34:12.834241 ignition[773]: GET http://169.254.169.254/v1/user-data: attempt #1
Apr 24 23:34:13.010350 ignition[773]: GET result: OK
Apr 24 23:34:13.010498 ignition[773]: parsing config with SHA512: deb77a6bfc00a41eccb18a98d2b16c813687eab79202a2b5b680b4b945cb2d70d69c2a04c857fb7924016e90035cea3ebc957c8da846cdf749a22e59fd64fd7f
Apr 24 23:34:13.014907 unknown[773]: fetched base config from "system"
Apr 24 23:34:13.015068 unknown[773]: fetched base config from "system"
Apr 24 23:34:13.015076 unknown[773]: fetched user config from "akamai"
Apr 24 23:34:13.017107 ignition[773]: fetch: fetch complete
Apr 24 23:34:13.017113 ignition[773]: fetch: fetch passed
Apr 24 23:34:13.017157 ignition[773]: Ignition finished successfully
Apr 24 23:34:13.021233 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 24 23:34:13.027802 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 23:34:13.059158 ignition[781]: Ignition 2.19.0
Apr 24 23:34:13.059169 ignition[781]: Stage: kargs
Apr 24 23:34:13.059315 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:34:13.061996 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 23:34:13.059327 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 23:34:13.060212 ignition[781]: kargs: kargs passed
Apr 24 23:34:13.060259 ignition[781]: Ignition finished successfully
Apr 24 23:34:13.068845 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 24 23:34:13.084272 ignition[787]: Ignition 2.19.0
Apr 24 23:34:13.084285 ignition[787]: Stage: disks
Apr 24 23:34:13.087053 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 23:34:13.084439 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:34:13.111280 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 23:34:13.084451 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 23:34:13.112811 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:34:13.085159 ignition[787]: disks: disks passed
Apr 24 23:34:13.114243 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:34:13.085200 ignition[787]: Ignition finished successfully
Apr 24 23:34:13.116059 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:34:13.117710 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:34:13.125850 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 23:34:13.143714 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 24 23:34:13.147226 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 24 23:34:13.155743 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 24 23:34:13.249695 kernel: EXT4-fs (sda9): mounted filesystem ae73d4a7-3ef8-4c50-8348-4aeb952085ba r/w with ordered data mode. Quota mode: none.
Apr 24 23:34:13.250865 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 24 23:34:13.252140 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:34:13.257739 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:34:13.260890 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 24 23:34:13.262720 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 24 23:34:13.262770 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 24 23:34:13.262793 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:34:13.276659 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 24 23:34:13.293773 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (804)
Apr 24 23:34:13.293798 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:34:13.293811 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:34:13.293822 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:34:13.293832 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 23:34:13.293843 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:34:13.294448 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:34:13.303801 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 24 23:34:13.355481 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory
Apr 24 23:34:13.364209 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory
Apr 24 23:34:13.370324 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory
Apr 24 23:34:13.375807 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 24 23:34:13.473044 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 24 23:34:13.478897 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 24 23:34:13.482963 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 24 23:34:13.495938 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 24 23:34:13.499320 kernel: BTRFS info (device sda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:34:13.523312 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 24 23:34:13.530015 ignition[922]: INFO : Ignition 2.19.0
Apr 24 23:34:13.530015 ignition[922]: INFO : Stage: mount
Apr 24 23:34:13.532884 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:34:13.532884 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 23:34:13.532884 ignition[922]: INFO : mount: mount passed
Apr 24 23:34:13.532884 ignition[922]: INFO : Ignition finished successfully
Apr 24 23:34:13.534084 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 24 23:34:13.542785 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 24 23:34:14.255810 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:34:14.279739 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (935)
Apr 24 23:34:14.279822 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:34:14.285121 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:34:14.285156 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:34:14.295267 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 23:34:14.295344 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:34:14.298404 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:34:14.324165 ignition[952]: INFO : Ignition 2.19.0
Apr 24 23:34:14.324165 ignition[952]: INFO : Stage: files
Apr 24 23:34:14.325861 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:34:14.325861 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 23:34:14.325861 ignition[952]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 23:34:14.328893 ignition[952]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 24 23:34:14.328893 ignition[952]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 23:34:14.331515 ignition[952]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 23:34:14.331515 ignition[952]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 24 23:34:14.333921 ignition[952]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 23:34:14.333921 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:34:14.333921 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 24 23:34:14.331855 unknown[952]: wrote ssh authorized keys file for user: core
Apr 24 23:34:14.681945 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 24 23:34:14.794121 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 23:34:14.811583 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 24 23:34:15.222356 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 24 23:34:15.562188 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 23:34:15.562188 ignition[952]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 24 23:34:15.565546 ignition[952]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: files passed
Apr 24 23:34:15.589698 ignition[952]: INFO : Ignition finished successfully
Apr 24 23:34:15.571988 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 23:34:15.598878 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 23:34:15.600949 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 23:34:15.605880 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 23:34:15.605988 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 24 23:34:15.620818 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:34:15.622044 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:34:15.623304 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:34:15.624280 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:34:15.625993 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 23:34:15.630944 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 23:34:15.658929 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 23:34:15.659068 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 23:34:15.661199 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 23:34:15.662897 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 23:34:15.664901 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 24 23:34:15.670824 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 24 23:34:15.687206 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 24 23:34:15.695851 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 24 23:34:15.713945 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 24 23:34:15.715380 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 24 23:34:15.717901 systemd[1]: Stopped target timers.target - Timer Units. Apr 24 23:34:15.719898 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 24 23:34:15.720058 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 24 23:34:15.722086 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 24 23:34:15.723348 systemd[1]: Stopped target basic.target - Basic System. Apr 24 23:34:15.725162 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 24 23:34:15.727246 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 24 23:34:15.729106 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 24 23:34:15.731134 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 24 23:34:15.732958 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 24 23:34:15.734890 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 24 23:34:15.736748 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 24 23:34:15.738970 systemd[1]: Stopped target swap.target - Swaps. Apr 24 23:34:15.741120 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Apr 24 23:34:15.741277 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 24 23:34:15.743526 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 24 23:34:15.745232 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 24 23:34:15.747056 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 24 23:34:15.747412 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 24 23:34:15.749277 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 24 23:34:15.749433 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 24 23:34:15.752199 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 24 23:34:15.752368 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 24 23:34:15.753847 systemd[1]: ignition-files.service: Deactivated successfully. Apr 24 23:34:15.754036 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 24 23:34:15.762894 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 24 23:34:15.768046 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 24 23:34:15.769080 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 24 23:34:15.769303 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 24 23:34:15.774337 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 24 23:34:15.774755 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 24 23:34:15.783086 ignition[1004]: INFO : Ignition 2.19.0 Apr 24 23:34:15.786531 ignition[1004]: INFO : Stage: umount Apr 24 23:34:15.786531 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 24 23:34:15.786531 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 24 23:34:15.784460 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 24 23:34:15.790620 ignition[1004]: INFO : umount: umount passed Apr 24 23:34:15.790620 ignition[1004]: INFO : Ignition finished successfully Apr 24 23:34:15.784605 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 24 23:34:15.791135 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 24 23:34:15.791288 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 24 23:34:15.793854 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 24 23:34:15.793956 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 24 23:34:15.797206 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 24 23:34:15.797261 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 24 23:34:15.798470 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 24 23:34:15.798520 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 24 23:34:15.802309 systemd[1]: Stopped target network.target - Network. Apr 24 23:34:15.804900 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 24 23:34:15.804977 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 24 23:34:15.806427 systemd[1]: Stopped target paths.target - Path Units. Apr 24 23:34:15.807285 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 24 23:34:15.815727 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 24 23:34:15.836720 systemd[1]: Stopped target slices.target - Slice Units. 
Apr 24 23:34:15.838401 systemd[1]: Stopped target sockets.target - Socket Units. Apr 24 23:34:15.840287 systemd[1]: iscsid.socket: Deactivated successfully. Apr 24 23:34:15.840353 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 24 23:34:15.842079 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 24 23:34:15.842131 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 24 23:34:15.843650 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 24 23:34:15.843728 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 24 23:34:15.845418 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 24 23:34:15.845479 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 24 23:34:15.847350 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 24 23:34:15.848938 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 24 23:34:15.851743 systemd-networkd[770]: eth0: DHCPv6 lease lost Apr 24 23:34:15.852496 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 24 23:34:15.854250 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 24 23:34:15.854378 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 24 23:34:15.857190 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 24 23:34:15.857339 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 24 23:34:15.860526 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 24 23:34:15.860952 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 24 23:34:15.864209 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 24 23:34:15.864264 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 24 23:34:15.866056 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 24 23:34:15.866111 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Apr 24 23:34:15.872966 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 24 23:34:15.873788 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 24 23:34:15.873846 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 24 23:34:15.877050 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 24 23:34:15.877102 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:34:15.878170 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 24 23:34:15.878238 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 24 23:34:15.879420 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 24 23:34:15.879491 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 23:34:15.881463 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 23:34:15.893248 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 24 23:34:15.894748 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 23:34:15.899008 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 24 23:34:15.899097 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 24 23:34:15.901943 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 24 23:34:15.902003 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 24 23:34:15.904397 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 24 23:34:15.904468 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 24 23:34:15.906338 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 24 23:34:15.906397 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 24 23:34:15.908372 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Apr 24 23:34:15.908425 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 24 23:34:15.916819 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 24 23:34:15.919478 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 24 23:34:15.919561 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 23:34:15.920428 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 24 23:34:15.920482 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 24 23:34:15.923948 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 24 23:34:15.924019 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 24 23:34:15.925137 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:34:15.925207 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:34:15.928338 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 24 23:34:15.928452 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 24 23:34:15.930869 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 24 23:34:15.931002 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 24 23:34:15.933381 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 24 23:34:15.943944 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 24 23:34:15.950446 systemd[1]: Switching root. 
Apr 24 23:34:15.993267 systemd-journald[178]: Journal stopped Apr 24 23:34:08.031955 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 24 22:11:38 -00 2026 Apr 24 23:34:08.031977 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb Apr 24 23:34:08.031986 kernel: BIOS-provided physical RAM map: Apr 24 23:34:08.031992 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Apr 24 23:34:08.031997 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Apr 24 23:34:08.032013 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 24 23:34:08.032020 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Apr 24 23:34:08.032026 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Apr 24 23:34:08.032031 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 24 23:34:08.032037 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 24 23:34:08.032043 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 24 23:34:08.032049 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 24 23:34:08.032055 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Apr 24 23:34:08.032064 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Apr 24 23:34:08.032071 kernel: NX (Execute Disable) protection: active Apr 24 23:34:08.032078 kernel: APIC: Static calls initialized Apr 24 23:34:08.032084 kernel: SMBIOS 2.8 present. 
Apr 24 23:34:08.032090 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified Apr 24 23:34:08.032097 kernel: Hypervisor detected: KVM Apr 24 23:34:08.032105 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 24 23:34:08.032112 kernel: kvm-clock: using sched offset of 5972091770 cycles Apr 24 23:34:08.032119 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 24 23:34:08.032125 kernel: tsc: Detected 2000.000 MHz processor Apr 24 23:34:08.032132 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 24 23:34:08.032139 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 24 23:34:08.032145 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Apr 24 23:34:08.032152 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 24 23:34:08.032158 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 24 23:34:08.032167 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Apr 24 23:34:08.032173 kernel: Using GB pages for direct mapping Apr 24 23:34:08.032180 kernel: ACPI: Early table checksum verification disabled Apr 24 23:34:08.032186 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS ) Apr 24 23:34:08.032193 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 24 23:34:08.032199 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 24 23:34:08.032205 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 24 23:34:08.032212 kernel: ACPI: FACS 0x000000007FFE0000 000040 Apr 24 23:34:08.032218 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 24 23:34:08.032227 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 24 23:34:08.032234 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 
24 23:34:08.032246 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 24 23:34:08.032260 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Apr 24 23:34:08.032269 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Apr 24 23:34:08.032276 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Apr 24 23:34:08.032285 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Apr 24 23:34:08.032292 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Apr 24 23:34:08.032299 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Apr 24 23:34:08.032305 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Apr 24 23:34:08.032312 kernel: No NUMA configuration found Apr 24 23:34:08.032318 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Apr 24 23:34:08.032325 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff] Apr 24 23:34:08.032331 kernel: Zone ranges: Apr 24 23:34:08.032340 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 24 23:34:08.032347 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 24 23:34:08.032354 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Apr 24 23:34:08.032360 kernel: Movable zone start for each node Apr 24 23:34:08.032367 kernel: Early memory node ranges Apr 24 23:34:08.032373 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 24 23:34:08.032380 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Apr 24 23:34:08.032386 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Apr 24 23:34:08.032396 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Apr 24 23:34:08.032411 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 24 23:34:08.032419 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 24 23:34:08.032426 kernel: On node 0, zone Normal: 35 pages in 
unavailable ranges Apr 24 23:34:08.032434 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 24 23:34:08.032447 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 24 23:34:08.032454 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 24 23:34:08.032461 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 24 23:34:08.032467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 24 23:34:08.032475 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 24 23:34:08.032490 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 24 23:34:08.032501 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 24 23:34:08.032511 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 24 23:34:08.032522 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 24 23:34:08.032529 kernel: TSC deadline timer available Apr 24 23:34:08.032535 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 24 23:34:08.032548 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 24 23:34:08.032555 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 24 23:34:08.032561 kernel: kvm-guest: setup PV sched yield Apr 24 23:34:08.032571 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 24 23:34:08.032577 kernel: Booting paravirtualized kernel on KVM Apr 24 23:34:08.032584 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 24 23:34:08.032591 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 24 23:34:08.032610 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Apr 24 23:34:08.032750 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Apr 24 23:34:08.032761 kernel: pcpu-alloc: [0] 0 1 Apr 24 23:34:08.032768 kernel: kvm-guest: PV spinlocks enabled Apr 24 23:34:08.032775 kernel: PV qspinlock hash table entries: 256 (order: 0, 
4096 bytes, linear) Apr 24 23:34:08.032787 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb Apr 24 23:34:08.032794 kernel: random: crng init done Apr 24 23:34:08.032800 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 24 23:34:08.032807 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 24 23:34:08.032813 kernel: Fallback order for Node 0: 0 Apr 24 23:34:08.032825 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Apr 24 23:34:08.032832 kernel: Policy zone: Normal Apr 24 23:34:08.032839 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 24 23:34:08.032845 kernel: software IO TLB: area num 2. Apr 24 23:34:08.032855 kernel: Memory: 3966212K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 227300K reserved, 0K cma-reserved) Apr 24 23:34:08.032861 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 24 23:34:08.032868 kernel: ftrace: allocating 37996 entries in 149 pages Apr 24 23:34:08.032875 kernel: ftrace: allocated 149 pages with 4 groups Apr 24 23:34:08.032881 kernel: Dynamic Preempt: voluntary Apr 24 23:34:08.032888 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 24 23:34:08.032895 kernel: rcu: RCU event tracing is enabled. Apr 24 23:34:08.032902 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 24 23:34:08.032909 kernel: Trampoline variant of Tasks RCU enabled. Apr 24 23:34:08.032918 kernel: Rude variant of Tasks RCU enabled. Apr 24 23:34:08.032925 kernel: Tracing variant of Tasks RCU enabled. 
Apr 24 23:34:08.032932 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 24 23:34:08.032938 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 24 23:34:08.032945 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 24 23:34:08.032952 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 24 23:34:08.032958 kernel: Console: colour VGA+ 80x25 Apr 24 23:34:08.032965 kernel: printk: console [tty0] enabled Apr 24 23:34:08.032971 kernel: printk: console [ttyS0] enabled Apr 24 23:34:08.032980 kernel: ACPI: Core revision 20230628 Apr 24 23:34:08.032991 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 24 23:34:08.033007 kernel: APIC: Switch to symmetric I/O mode setup Apr 24 23:34:08.033018 kernel: x2apic enabled Apr 24 23:34:08.033033 kernel: APIC: Switched APIC routing to: physical x2apic Apr 24 23:34:08.033043 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 24 23:34:08.033050 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 24 23:34:08.033057 kernel: kvm-guest: setup PV IPIs Apr 24 23:34:08.033064 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 24 23:34:08.033070 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Apr 24 23:34:08.033077 kernel: Calibrating delay loop (skipped) preset value.. 
4000.00 BogoMIPS (lpj=2000000) Apr 24 23:34:08.033084 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 24 23:34:08.033094 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Apr 24 23:34:08.033101 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Apr 24 23:34:08.033108 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 24 23:34:08.033115 kernel: Spectre V2 : Mitigation: Retpolines Apr 24 23:34:08.033125 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 24 23:34:08.033132 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Apr 24 23:34:08.033143 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 24 23:34:08.033156 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 24 23:34:08.033167 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Apr 24 23:34:08.033180 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Apr 24 23:34:08.033191 kernel: active return thunk: srso_alias_return_thunk Apr 24 23:34:08.033201 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Apr 24 23:34:08.033208 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Apr 24 23:34:08.033225 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Apr 24 23:34:08.033232 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 24 23:34:08.033239 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 24 23:34:08.033246 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 24 23:34:08.033253 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Apr 24 23:34:08.033264 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 24 23:34:08.033276 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Apr 24 23:34:08.033288 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. Apr 24 23:34:08.033298 kernel: Freeing SMP alternatives memory: 32K Apr 24 23:34:08.033305 kernel: pid_max: default: 32768 minimum: 301 Apr 24 23:34:08.033311 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 24 23:34:08.033318 kernel: landlock: Up and running. Apr 24 23:34:08.033325 kernel: SELinux: Initializing. Apr 24 23:34:08.033331 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 24 23:34:08.033338 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 24 23:34:08.033345 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Apr 24 23:34:08.033352 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 24 23:34:08.033361 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Apr 24 23:34:08.033368 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 24 23:34:08.033375 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Apr 24 23:34:08.033381 kernel: ... version: 0 Apr 24 23:34:08.033392 kernel: ... bit width: 48 Apr 24 23:34:08.033398 kernel: ... generic registers: 6 Apr 24 23:34:08.033405 kernel: ... value mask: 0000ffffffffffff Apr 24 23:34:08.033431 kernel: ... max period: 00007fffffffffff Apr 24 23:34:08.033438 kernel: ... fixed-purpose events: 0 Apr 24 23:34:08.033448 kernel: ... event mask: 000000000000003f Apr 24 23:34:08.033455 kernel: signal: max sigframe size: 3376 Apr 24 23:34:08.033461 kernel: rcu: Hierarchical SRCU implementation. Apr 24 23:34:08.033468 kernel: rcu: Max phase no-delay instances is 400. Apr 24 23:34:08.033475 kernel: smp: Bringing up secondary CPUs ... Apr 24 23:34:08.033482 kernel: smpboot: x86: Booting SMP configuration: Apr 24 23:34:08.033488 kernel: .... node #0, CPUs: #1 Apr 24 23:34:08.033495 kernel: smp: Brought up 1 node, 2 CPUs Apr 24 23:34:08.033502 kernel: smpboot: Max logical packages: 1 Apr 24 23:34:08.033508 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Apr 24 23:34:08.033517 kernel: devtmpfs: initialized Apr 24 23:34:08.033524 kernel: x86/mm: Memory block size: 128MB Apr 24 23:34:08.033543 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 24 23:34:08.033571 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 24 23:34:08.033584 kernel: pinctrl core: initialized pinctrl subsystem Apr 24 23:34:08.033596 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 24 23:34:08.033605 kernel: audit: initializing netlink subsys (disabled) Apr 24 23:34:08.033617 kernel: audit: type=2000 audit(1777073646.228:1): state=initialized audit_enabled=0 res=1 Apr 24 23:34:08.033633 kernel: thermal_sys: Registered thermal governor 'step_wise' 
Apr 24 23:34:08.033650 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 24 23:34:08.033662 kernel: cpuidle: using governor menu
Apr 24 23:34:08.033687 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 23:34:08.033694 kernel: dca service started, version 1.12.1
Apr 24 23:34:08.033701 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 24 23:34:08.033708 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 24 23:34:08.033715 kernel: PCI: Using configuration type 1 for base access
Apr 24 23:34:08.033722 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 24 23:34:08.033730 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 23:34:08.033742 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 23:34:08.033749 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 23:34:08.033756 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 23:34:08.033762 kernel: ACPI: Added _OSI(Module Device)
Apr 24 23:34:08.033769 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 23:34:08.033776 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 23:34:08.033783 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 23:34:08.033789 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 24 23:34:08.033796 kernel: ACPI: Interpreter enabled
Apr 24 23:34:08.033805 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 24 23:34:08.033812 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 24 23:34:08.033819 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 24 23:34:08.033825 kernel: PCI: Using E820 reservations for host bridge windows
Apr 24 23:34:08.033832 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 24 23:34:08.033839 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 24 23:34:08.034028 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 24 23:34:08.034167 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 24 23:34:08.034308 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 24 23:34:08.034325 kernel: PCI host bridge to bus 0000:00
Apr 24 23:34:08.034484 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 24 23:34:08.034661 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 24 23:34:08.034830 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 24 23:34:08.035006 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 24 23:34:08.035155 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 24 23:34:08.035276 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Apr 24 23:34:08.035392 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 24 23:34:08.035627 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 24 23:34:08.035824 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 24 23:34:08.035999 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 24 23:34:08.036149 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 24 23:34:08.036382 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 24 23:34:08.036552 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 24 23:34:08.038775 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Apr 24 23:34:08.038920 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Apr 24 23:34:08.039048 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 24 23:34:08.039186 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 24 23:34:08.039324 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 24 23:34:08.039463 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Apr 24 23:34:08.039645 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 24 23:34:08.040847 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 24 23:34:08.040981 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 24 23:34:08.041121 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 24 23:34:08.041285 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 24 23:34:08.041435 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 24 23:34:08.041603 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Apr 24 23:34:08.041780 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Apr 24 23:34:08.042777 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 24 23:34:08.042917 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 24 23:34:08.042928 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 24 23:34:08.042935 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 24 23:34:08.042943 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 24 23:34:08.042954 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 24 23:34:08.042961 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 24 23:34:08.042968 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 24 23:34:08.042975 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 24 23:34:08.042981 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 24 23:34:08.042988 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 24 23:34:08.042995 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 24 23:34:08.043002 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 24 23:34:08.043011 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 24 23:34:08.043018 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 24 23:34:08.043025 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 24 23:34:08.043032 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 24 23:34:08.043039 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 24 23:34:08.043046 kernel: iommu: Default domain type: Translated
Apr 24 23:34:08.043053 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 24 23:34:08.043060 kernel: PCI: Using ACPI for IRQ routing
Apr 24 23:34:08.043071 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 24 23:34:08.043083 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Apr 24 23:34:08.043099 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Apr 24 23:34:08.043245 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 24 23:34:08.043373 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 24 23:34:08.043508 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 24 23:34:08.043519 kernel: vgaarb: loaded
Apr 24 23:34:08.043526 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 24 23:34:08.043533 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 24 23:34:08.043540 kernel: clocksource: Switched to clocksource kvm-clock
Apr 24 23:34:08.043551 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 23:34:08.044785 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 23:34:08.044800 kernel: pnp: PnP ACPI init
Apr 24 23:34:08.044998 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 24 23:34:08.045014 kernel: pnp: PnP ACPI: found 5 devices
Apr 24 23:34:08.045022 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 24 23:34:08.045029 kernel: NET: Registered PF_INET protocol family
Apr 24 23:34:08.045037 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 23:34:08.045050 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 24 23:34:08.045057 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 23:34:08.045064 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 24 23:34:08.045070 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 24 23:34:08.045078 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 24 23:34:08.045085 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 23:34:08.045092 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 23:34:08.045099 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 23:34:08.045105 kernel: NET: Registered PF_XDP protocol family
Apr 24 23:34:08.045248 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 24 23:34:08.045377 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 24 23:34:08.045756 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 24 23:34:08.045890 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Apr 24 23:34:08.046032 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 24 23:34:08.046169 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Apr 24 23:34:08.046180 kernel: PCI: CLS 0 bytes, default 64
Apr 24 23:34:08.046188 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 24 23:34:08.046201 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Apr 24 23:34:08.046208 kernel: Initialise system trusted keyrings
Apr 24 23:34:08.046215 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 24 23:34:08.046222 kernel: Key type asymmetric registered
Apr 24 23:34:08.046229 kernel: Asymmetric key parser 'x509' registered
Apr 24 23:34:08.046236 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 24 23:34:08.046243 kernel: io scheduler mq-deadline registered
Apr 24 23:34:08.046251 kernel: io scheduler kyber registered
Apr 24 23:34:08.046260 kernel: io scheduler bfq registered
Apr 24 23:34:08.046277 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 24 23:34:08.046290 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 24 23:34:08.046303 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 24 23:34:08.046315 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 23:34:08.046327 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 24 23:34:08.046336 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 24 23:34:08.046343 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 24 23:34:08.046350 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 24 23:34:08.046357 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 24 23:34:08.046505 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 24 23:34:08.048846 kernel: rtc_cmos 00:03: registered as rtc0
Apr 24 23:34:08.048997 kernel: rtc_cmos 00:03: setting system clock to 2026-04-24T23:34:07 UTC (1777073647)
Apr 24 23:34:08.049121 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 24 23:34:08.049131 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 24 23:34:08.049139 kernel: NET: Registered PF_INET6 protocol family
Apr 24 23:34:08.049146 kernel: Segment Routing with IPv6
Apr 24 23:34:08.049154 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 23:34:08.049166 kernel: NET: Registered PF_PACKET protocol family
Apr 24 23:34:08.049173 kernel: Key type dns_resolver registered
Apr 24 23:34:08.049179 kernel: IPI shorthand broadcast: enabled
Apr 24 23:34:08.049187 kernel: sched_clock: Marking stable (928003210, 343000830)->(1405406250, -134402210)
Apr 24 23:34:08.049194 kernel: registered taskstats version 1
Apr 24 23:34:08.049201 kernel: Loading compiled-in X.509 certificates
Apr 24 23:34:08.049209 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 507f116e6718ec7535b55c873de10edf9b6fe124'
Apr 24 23:34:08.049216 kernel: Key type .fscrypt registered
Apr 24 23:34:08.049223 kernel: Key type fscrypt-provisioning registered
Apr 24 23:34:08.049233 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 23:34:08.049240 kernel: ima: Allocated hash algorithm: sha1
Apr 24 23:34:08.049247 kernel: ima: No architecture policies found
Apr 24 23:34:08.049255 kernel: clk: Disabling unused clocks
Apr 24 23:34:08.049262 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 24 23:34:08.049269 kernel: Write protecting the kernel read-only data: 36864k
Apr 24 23:34:08.049276 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 24 23:34:08.049283 kernel: Run /init as init process
Apr 24 23:34:08.049291 kernel: with arguments:
Apr 24 23:34:08.049301 kernel: /init
Apr 24 23:34:08.049308 kernel: with environment:
Apr 24 23:34:08.049315 kernel: HOME=/
Apr 24 23:34:08.049322 kernel: TERM=linux
Apr 24 23:34:08.049331 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:34:08.049340 systemd[1]: Detected virtualization kvm.
Apr 24 23:34:08.049348 systemd[1]: Detected architecture x86-64.
Apr 24 23:34:08.049358 systemd[1]: Running in initrd.
Apr 24 23:34:08.049365 systemd[1]: No hostname configured, using default hostname.
Apr 24 23:34:08.049373 systemd[1]: Hostname set to .
Apr 24 23:34:08.049380 systemd[1]: Initializing machine ID from random generator.
Apr 24 23:34:08.049388 systemd[1]: Queued start job for default target initrd.target.
Apr 24 23:34:08.049396 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:34:08.049416 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:34:08.049576 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 23:34:08.049636 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:34:08.049646 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 23:34:08.049654 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 23:34:08.049663 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 23:34:08.049686 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 23:34:08.049698 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:34:08.049710 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:34:08.049718 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:34:08.049725 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:34:08.049733 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:34:08.049741 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:34:08.049749 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:34:08.049757 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:34:08.049765 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:34:08.049776 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:34:08.049784 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:34:08.049792 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:34:08.049799 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:34:08.049807 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:34:08.049838 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 23:34:08.049846 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:34:08.049854 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 23:34:08.049865 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 23:34:08.049874 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:34:08.049881 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:34:08.049889 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:34:08.049919 systemd-journald[178]: Collecting audit messages is disabled.
Apr 24 23:34:08.049939 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 23:34:08.049950 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:34:08.049958 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 23:34:08.049969 systemd-journald[178]: Journal started
Apr 24 23:34:08.049986 systemd-journald[178]: Runtime Journal (/run/log/journal/9ce3a3ae99f84caf9275b7850d74f96f) is 8.0M, max 78.3M, 70.3M free.
Apr 24 23:34:08.028182 systemd-modules-load[179]: Inserted module 'overlay'
Apr 24 23:34:08.060350 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:34:08.067702 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 23:34:08.068930 systemd-modules-load[179]: Inserted module 'br_netfilter'
Apr 24 23:34:08.152707 kernel: Bridge firewalling registered
Apr 24 23:34:08.153994 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:34:08.155206 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:34:08.163862 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:34:08.166470 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:34:08.169814 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:34:08.174829 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:34:08.186585 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:34:08.211071 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:34:08.213184 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:34:08.221876 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 23:34:08.226940 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:34:08.229354 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:34:08.233749 dracut-cmdline[207]: dracut-dracut-053
Apr 24 23:34:08.239031 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:34:08.242341 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:34:08.253969 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:34:08.282434 systemd-resolved[219]: Positive Trust Anchors:
Apr 24 23:34:08.283521 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:34:08.283569 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:34:08.291012 systemd-resolved[219]: Defaulting to hostname 'linux'.
Apr 24 23:34:08.292935 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:34:08.295857 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:34:08.346860 kernel: SCSI subsystem initialized
Apr 24 23:34:08.357701 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 23:34:08.370794 kernel: iscsi: registered transport (tcp)
Apr 24 23:34:08.393792 kernel: iscsi: registered transport (qla4xxx)
Apr 24 23:34:08.393873 kernel: QLogic iSCSI HBA Driver
Apr 24 23:34:08.453206 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:34:08.458818 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 23:34:08.493750 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 23:34:08.493786 kernel: device-mapper: uevent: version 1.0.3
Apr 24 23:34:08.496137 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 24 23:34:08.541704 kernel: raid6: avx2x4 gen() 33845 MB/s
Apr 24 23:34:08.559704 kernel: raid6: avx2x2 gen() 30588 MB/s
Apr 24 23:34:08.577984 kernel: raid6: avx2x1 gen() 23450 MB/s
Apr 24 23:34:08.578013 kernel: raid6: using algorithm avx2x4 gen() 33845 MB/s
Apr 24 23:34:08.602437 kernel: raid6: .... xor() 4853 MB/s, rmw enabled
Apr 24 23:34:08.602478 kernel: raid6: using avx2x2 recovery algorithm
Apr 24 23:34:08.624704 kernel: xor: automatically using best checksumming function avx
Apr 24 23:34:08.764706 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 23:34:08.778744 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:34:08.783900 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:34:08.804423 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Apr 24 23:34:08.809909 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:34:08.815803 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 23:34:08.840391 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation
Apr 24 23:34:08.876129 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:34:08.882809 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:34:08.959630 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:34:08.972775 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 24 23:34:08.987316 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:34:08.990447 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:34:08.992249 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:34:08.993874 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:34:09.001145 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 23:34:09.025018 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:34:09.289706 kernel: cryptd: max_cpu_qlen set to 1000
Apr 24 23:34:09.295815 kernel: libata version 3.00 loaded.
Apr 24 23:34:09.300696 kernel: scsi host0: Virtio SCSI HBA
Apr 24 23:34:09.315578 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 24 23:34:09.315643 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 24 23:34:09.315734 kernel: AES CTR mode by8 optimization enabled
Apr 24 23:34:09.316480 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:34:09.316900 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:34:09.350862 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:34:09.352042 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:34:09.352192 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:34:09.353086 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:34:09.369179 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:34:09.380665 kernel: ahci 0000:00:1f.2: version 3.0
Apr 24 23:34:09.380970 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 24 23:34:09.388128 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 24 23:34:09.388333 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 24 23:34:09.395774 kernel: scsi host1: ahci
Apr 24 23:34:09.402137 kernel: scsi host2: ahci
Apr 24 23:34:09.406732 kernel: scsi host3: ahci
Apr 24 23:34:09.408698 kernel: scsi host4: ahci
Apr 24 23:34:09.410699 kernel: scsi host5: ahci
Apr 24 23:34:09.411078 kernel: scsi host6: ahci
Apr 24 23:34:09.411238 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Apr 24 23:34:09.411253 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Apr 24 23:34:09.411269 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Apr 24 23:34:09.411284 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Apr 24 23:34:09.411294 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Apr 24 23:34:09.411311 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Apr 24 23:34:09.520018 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:34:09.526054 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:34:09.551907 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:34:09.728706 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 24 23:34:09.728785 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 24 23:34:09.728802 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 24 23:34:09.729713 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 24 23:34:09.732754 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 24 23:34:09.737833 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 24 23:34:09.758439 kernel: sd 0:0:0:0: Power-on or device reset occurred
Apr 24 23:34:09.758947 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Apr 24 23:34:09.787002 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 24 23:34:09.789948 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Apr 24 23:34:09.790145 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 24 23:34:09.800631 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 24 23:34:09.800708 kernel: GPT:9289727 != 167739391
Apr 24 23:34:09.800728 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 24 23:34:09.804632 kernel: GPT:9289727 != 167739391
Apr 24 23:34:09.804685 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 24 23:34:09.808758 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:34:09.811156 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 24 23:34:09.848720 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (442)
Apr 24 23:34:09.848972 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 24 23:34:09.860193 kernel: BTRFS: device fsid 077bb4ac-fe88-409a-8f61-fdf28cadf681 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (464)
Apr 24 23:34:09.866126 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 24 23:34:09.872577 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 24 23:34:09.878540 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 24 23:34:09.879648 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 24 23:34:09.892807 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 23:34:09.897953 disk-uuid[570]: Primary Header is updated.
Apr 24 23:34:09.897953 disk-uuid[570]: Secondary Entries is updated.
Apr 24 23:34:09.897953 disk-uuid[570]: Secondary Header is updated.
Apr 24 23:34:09.903690 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:34:09.910722 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:34:10.916142 disk-uuid[571]: The operation has completed successfully.
Apr 24 23:34:10.917362 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:34:10.977455 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 24 23:34:10.977833 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 24 23:34:10.989920 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 24 23:34:10.995750 sh[585]: Success
Apr 24 23:34:11.011692 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 24 23:34:11.064642 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 24 23:34:11.067719 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 24 23:34:11.070361 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 24 23:34:11.095317 kernel: BTRFS info (device dm-0): first mount of filesystem 077bb4ac-fe88-409a-8f61-fdf28cadf681
Apr 24 23:34:11.095348 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:34:11.101361 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 24 23:34:11.101390 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 24 23:34:11.106419 kernel: BTRFS info (device dm-0): using free space tree
Apr 24 23:34:11.115698 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 24 23:34:11.117209 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 24 23:34:11.118414 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 24 23:34:11.125789 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 24 23:34:11.129005 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 24 23:34:11.147144 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:34:11.147174 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:34:11.147194 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:34:11.158786 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 23:34:11.158810 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:34:11.172116 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 24 23:34:11.175731 kernel: BTRFS info (device sda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:34:11.183381 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 24 23:34:11.193419 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 24 23:34:11.273611 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:34:11.282945 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:34:11.283041 ignition[683]: Ignition 2.19.0
Apr 24 23:34:11.285756 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:34:11.283048 ignition[683]: Stage: fetch-offline
Apr 24 23:34:11.283085 ignition[683]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:34:11.283095 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 23:34:11.283179 ignition[683]: parsed url from cmdline: ""
Apr 24 23:34:11.283184 ignition[683]: no config URL provided
Apr 24 23:34:11.283190 ignition[683]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:34:11.283199 ignition[683]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:34:11.283205 ignition[683]: failed to fetch config: resource requires networking
Apr 24 23:34:11.283655 ignition[683]: Ignition finished successfully
Apr 24 23:34:11.308469 systemd-networkd[770]: lo: Link UP
Apr 24 23:34:11.308482 systemd-networkd[770]: lo: Gained carrier
Apr 24 23:34:11.310190 systemd-networkd[770]: Enumeration completed
Apr 24 23:34:11.310273 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:34:11.310818 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:34:11.310822 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:34:11.312389 systemd-networkd[770]: eth0: Link UP
Apr 24 23:34:11.312393 systemd-networkd[770]: eth0: Gained carrier
Apr 24 23:34:11.312401 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:34:11.313183 systemd[1]: Reached target network.target - Network.
Apr 24 23:34:11.320837 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 24 23:34:11.333788 ignition[773]: Ignition 2.19.0
Apr 24 23:34:11.333801 ignition[773]: Stage: fetch
Apr 24 23:34:11.333954 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:34:11.333965 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 23:34:11.334040 ignition[773]: parsed url from cmdline: ""
Apr 24 23:34:11.334044 ignition[773]: no config URL provided
Apr 24 23:34:11.334050 ignition[773]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:34:11.334059 ignition[773]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:34:11.334075 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #1
Apr 24 23:34:11.334219 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 24 23:34:11.534372 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #2
Apr 24 23:34:11.534546 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 24 23:34:11.934904 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #3
Apr 24 23:34:11.935056 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 24 23:34:12.030749 systemd-networkd[770]: eth0: DHCPv4 address 172.238.161.65/24, gateway 172.238.161.1 acquired from 23.213.15.243
Apr 24 23:34:12.443941 systemd-networkd[770]: eth0: Gained IPv6LL
Apr 24 23:34:12.736091 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #4
Apr 24 23:34:12.834179 ignition[773]: PUT result: OK
Apr 24 23:34:12.834241 ignition[773]: GET http://169.254.169.254/v1/user-data: attempt #1
Apr 24 23:34:13.010350 ignition[773]: GET result: OK
Apr 24 23:34:13.010498 ignition[773]: parsing config with SHA512: deb77a6bfc00a41eccb18a98d2b16c813687eab79202a2b5b680b4b945cb2d70d69c2a04c857fb7924016e90035cea3ebc957c8da846cdf749a22e59fd64fd7f
Apr 24 23:34:13.014907 unknown[773]: fetched base config from "system"
Apr 24 23:34:13.015068 unknown[773]: fetched base config from "system"
Apr 24 23:34:13.015076 unknown[773]: fetched user config from "akamai"
Apr 24 23:34:13.017107 ignition[773]: fetch: fetch complete
Apr 24 23:34:13.017113 ignition[773]: fetch: fetch passed
Apr 24 23:34:13.017157 ignition[773]: Ignition finished successfully
Apr 24 23:34:13.021233 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 24 23:34:13.027802 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 23:34:13.059158 ignition[781]: Ignition 2.19.0
Apr 24 23:34:13.059169 ignition[781]: Stage: kargs
Apr 24 23:34:13.059315 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:34:13.061996 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 23:34:13.059327 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 23:34:13.060212 ignition[781]: kargs: kargs passed
Apr 24 23:34:13.060259 ignition[781]: Ignition finished successfully
Apr 24 23:34:13.068845 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 24 23:34:13.084272 ignition[787]: Ignition 2.19.0
Apr 24 23:34:13.084285 ignition[787]: Stage: disks
Apr 24 23:34:13.087053 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 23:34:13.084439 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:34:13.111280 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 23:34:13.084451 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 23:34:13.112811 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:34:13.085159 ignition[787]: disks: disks passed
Apr 24 23:34:13.114243 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:34:13.085200 ignition[787]: Ignition finished successfully
Apr 24 23:34:13.116059 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:34:13.117710 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:34:13.125850 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 23:34:13.143714 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 24 23:34:13.147226 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 24 23:34:13.155743 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 24 23:34:13.249695 kernel: EXT4-fs (sda9): mounted filesystem ae73d4a7-3ef8-4c50-8348-4aeb952085ba r/w with ordered data mode. Quota mode: none.
Apr 24 23:34:13.250865 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 24 23:34:13.252140 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:34:13.257739 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:34:13.260890 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 24 23:34:13.262720 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 24 23:34:13.262770 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 24 23:34:13.262793 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:34:13.276659 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 24 23:34:13.293773 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (804)
Apr 24 23:34:13.293798 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:34:13.293811 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:34:13.293822 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:34:13.293832 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 23:34:13.293843 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:34:13.294448 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:34:13.303801 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 24 23:34:13.355481 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory
Apr 24 23:34:13.364209 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory
Apr 24 23:34:13.370324 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory
Apr 24 23:34:13.375807 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 24 23:34:13.473044 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 24 23:34:13.478897 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 24 23:34:13.482963 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 24 23:34:13.495938 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 24 23:34:13.499320 kernel: BTRFS info (device sda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:34:13.523312 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 24 23:34:13.530015 ignition[922]: INFO : Ignition 2.19.0
Apr 24 23:34:13.530015 ignition[922]: INFO : Stage: mount
Apr 24 23:34:13.532884 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:34:13.532884 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 23:34:13.532884 ignition[922]: INFO : mount: mount passed
Apr 24 23:34:13.532884 ignition[922]: INFO : Ignition finished successfully
Apr 24 23:34:13.534084 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 24 23:34:13.542785 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 24 23:34:14.255810 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:34:14.279739 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (935)
Apr 24 23:34:14.279822 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:34:14.285121 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:34:14.285156 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:34:14.295267 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 23:34:14.295344 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:34:14.298404 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:34:14.324165 ignition[952]: INFO : Ignition 2.19.0
Apr 24 23:34:14.324165 ignition[952]: INFO : Stage: files
Apr 24 23:34:14.325861 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:34:14.325861 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 23:34:14.325861 ignition[952]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 23:34:14.328893 ignition[952]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 24 23:34:14.328893 ignition[952]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 23:34:14.331515 ignition[952]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 23:34:14.331515 ignition[952]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 24 23:34:14.333921 ignition[952]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 23:34:14.333921 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:34:14.333921 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 24 23:34:14.331855 unknown[952]: wrote ssh authorized keys file for user: core
Apr 24 23:34:14.681945 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 24 23:34:14.794121 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 23:34:14.795588 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 23:34:14.811583 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 24 23:34:15.222356 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 24 23:34:15.562188 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 23:34:15.562188 ignition[952]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 24 23:34:15.565546 ignition[952]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:34:15.589698 ignition[952]: INFO : files: files passed
Apr 24 23:34:15.589698 ignition[952]: INFO : Ignition finished successfully
Apr 24 23:34:15.571988 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 23:34:15.598878 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 23:34:15.600949 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 23:34:15.605880 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 23:34:15.605988 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 24 23:34:15.620818 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:34:15.622044 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:34:15.623304 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:34:15.624280 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:34:15.625993 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 23:34:15.630944 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 23:34:15.658929 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 23:34:15.659068 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 23:34:15.661199 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 23:34:15.662897 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 23:34:15.664901 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 24 23:34:15.670824 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 24 23:34:15.687206 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:34:15.695851 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 24 23:34:15.713945 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:34:15.715380 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:34:15.717901 systemd[1]: Stopped target timers.target - Timer Units.
Apr 24 23:34:15.719898 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 24 23:34:15.720058 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:34:15.722086 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 24 23:34:15.723348 systemd[1]: Stopped target basic.target - Basic System.
Apr 24 23:34:15.725162 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 24 23:34:15.727246 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:34:15.729106 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 24 23:34:15.731134 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 24 23:34:15.732958 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:34:15.734890 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 24 23:34:15.736748 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 24 23:34:15.738970 systemd[1]: Stopped target swap.target - Swaps.
Apr 24 23:34:15.741120 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 24 23:34:15.741277 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:34:15.743526 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:34:15.745232 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:34:15.747056 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 24 23:34:15.747412 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:34:15.749277 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 24 23:34:15.749433 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:34:15.752199 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 24 23:34:15.752368 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:34:15.753847 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 24 23:34:15.754036 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 24 23:34:15.762894 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 24 23:34:15.768046 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 24 23:34:15.769080 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 24 23:34:15.769303 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:34:15.774337 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 24 23:34:15.774755 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:34:15.783086 ignition[1004]: INFO : Ignition 2.19.0
Apr 24 23:34:15.786531 ignition[1004]: INFO : Stage: umount
Apr 24 23:34:15.786531 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:34:15.786531 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 23:34:15.784460 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 24 23:34:15.790620 ignition[1004]: INFO : umount: umount passed
Apr 24 23:34:15.790620 ignition[1004]: INFO : Ignition finished successfully
Apr 24 23:34:15.784605 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 24 23:34:15.791135 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 24 23:34:15.791288 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 24 23:34:15.793854 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 24 23:34:15.793956 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 24 23:34:15.797206 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 24 23:34:15.797261 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 24 23:34:15.798470 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 24 23:34:15.798520 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 24 23:34:15.802309 systemd[1]: Stopped target network.target - Network.
Apr 24 23:34:15.804900 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 24 23:34:15.804977 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:34:15.806427 systemd[1]: Stopped target paths.target - Path Units.
Apr 24 23:34:15.807285 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 24 23:34:15.815727 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:34:15.836720 systemd[1]: Stopped target slices.target - Slice Units.
Apr 24 23:34:15.838401 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 24 23:34:15.840287 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 24 23:34:15.840353 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:34:15.842079 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 24 23:34:15.842131 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:34:15.843650 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 24 23:34:15.843728 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 24 23:34:15.845418 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 24 23:34:15.845479 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 24 23:34:15.847350 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 24 23:34:15.848938 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 24 23:34:15.851743 systemd-networkd[770]: eth0: DHCPv6 lease lost
Apr 24 23:34:15.852496 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 24 23:34:15.854250 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 24 23:34:15.854378 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 24 23:34:15.857190 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 24 23:34:15.857339 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 24 23:34:15.860526 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 24 23:34:15.860952 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 24 23:34:15.864209 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 24 23:34:15.864264 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:34:15.866056 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 24 23:34:15.866111 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 24 23:34:15.872966 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 24 23:34:15.873788 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 24 23:34:15.873846 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:34:15.877050 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 24 23:34:15.877102 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:34:15.878170 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 24 23:34:15.878238 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:34:15.879420 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 24 23:34:15.879491 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:34:15.881463 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:34:15.893248 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 24 23:34:15.894748 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:34:15.899008 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 24 23:34:15.899097 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:34:15.901943 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 24 23:34:15.902003 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:34:15.904397 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 24 23:34:15.904468 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:34:15.906338 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 24 23:34:15.906397 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:34:15.908372 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:34:15.908425 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:34:15.916819 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 24 23:34:15.919478 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 24 23:34:15.919561 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:34:15.920428 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 24 23:34:15.920482 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:34:15.923948 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 24 23:34:15.924019 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:34:15.925137 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:34:15.925207 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:34:15.928338 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 24 23:34:15.928452 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 24 23:34:15.930869 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 24 23:34:15.931002 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 24 23:34:15.933381 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 24 23:34:15.943944 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 24 23:34:15.950446 systemd[1]: Switching root.
Apr 24 23:34:15.993267 systemd-journald[178]: Journal stopped
Apr 24 23:34:17.272822 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Apr 24 23:34:17.272854 kernel: SELinux: policy capability network_peer_controls=1
Apr 24 23:34:17.272866 kernel: SELinux: policy capability open_perms=1
Apr 24 23:34:17.272875 kernel: SELinux: policy capability extended_socket_class=1
Apr 24 23:34:17.272888 kernel: SELinux: policy capability always_check_network=0
Apr 24 23:34:17.272897 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 24 23:34:17.272907 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 24 23:34:17.272917 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 24 23:34:17.272926 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 24 23:34:17.272936 kernel: audit: type=1403 audit(1777073656.151:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 24 23:34:17.272946 systemd[1]: Successfully loaded SELinux policy in 59.626ms.
Apr 24 23:34:17.272959 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.340ms.
Apr 24 23:34:17.272973 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:34:17.272984 systemd[1]: Detected virtualization kvm.
Apr 24 23:34:17.272995 systemd[1]: Detected architecture x86-64.
Apr 24 23:34:17.273005 systemd[1]: Detected first boot.
Apr 24 23:34:17.273018 systemd[1]: Initializing machine ID from random generator.
Apr 24 23:34:17.273028 zram_generator::config[1046]: No configuration found.
Apr 24 23:34:17.273039 systemd[1]: Populated /etc with preset unit settings.
Apr 24 23:34:17.273049 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 24 23:34:17.273059 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 24 23:34:17.273070 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 24 23:34:17.273080 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 24 23:34:17.273093 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 24 23:34:17.273104 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 24 23:34:17.273114 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 24 23:34:17.273124 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 24 23:34:17.273135 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 24 23:34:17.273145 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 24 23:34:17.273155 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 24 23:34:17.273168 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:34:17.273179 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:34:17.273197 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 24 23:34:17.273214 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 24 23:34:17.273227 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 24 23:34:17.273237 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:34:17.273247 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 24 23:34:17.273258 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:34:17.273271 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 24 23:34:17.273282 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 24 23:34:17.273296 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:34:17.273307 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 24 23:34:17.273317 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:34:17.273328 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:34:17.273338 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:34:17.273348 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:34:17.273361 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 24 23:34:17.273372 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 24 23:34:17.273383 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:34:17.273393 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:34:17.273403 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:34:17.273416 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 24 23:34:17.273427 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 24 23:34:17.273437 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 24 23:34:17.273448 systemd[1]: Mounting media.mount - External Media Directory...
Apr 24 23:34:17.273459 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:34:17.273469 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 24 23:34:17.273479 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 24 23:34:17.273490 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 24 23:34:17.273503 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 24 23:34:17.273513 systemd[1]: Reached target machines.target - Containers.
Apr 24 23:34:17.273692 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 24 23:34:17.273704 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:34:17.273714 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:34:17.273725 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 24 23:34:17.273735 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:34:17.273746 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 23:34:17.273759 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:34:17.273770 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 24 23:34:17.273780 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:34:17.273791 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 24 23:34:17.273801 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 24 23:34:17.273817 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 24 23:34:17.273834 kernel: fuse: init (API version 7.39)
Apr 24 23:34:17.273850 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 24 23:34:17.273871 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 24 23:34:17.273883 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:34:17.273894 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 24 23:34:17.273904 kernel: ACPI: bus type drm_connector registered Apr 24 23:34:17.273914 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 24 23:34:17.273924 kernel: loop: module loaded Apr 24 23:34:17.273934 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 24 23:34:17.273966 systemd-journald[1122]: Collecting audit messages is disabled. Apr 24 23:34:17.273992 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 24 23:34:17.274003 systemd[1]: verity-setup.service: Deactivated successfully. Apr 24 23:34:17.274014 systemd[1]: Stopped verity-setup.service. Apr 24 23:34:17.274025 systemd-journald[1122]: Journal started Apr 24 23:34:17.274047 systemd-journald[1122]: Runtime Journal (/run/log/journal/fe46ca9b42ce4e38ab1f9b0e34719381) is 8.0M, max 78.3M, 70.3M free. Apr 24 23:34:16.848147 systemd[1]: Queued start job for default target multi-user.target. Apr 24 23:34:16.871933 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 24 23:34:16.872512 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 24 23:34:17.279743 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:34:17.287689 systemd[1]: Started systemd-journald.service - Journal Service. Apr 24 23:34:17.290331 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 24 23:34:17.291207 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 24 23:34:17.292114 systemd[1]: Mounted media.mount - External Media Directory. Apr 24 23:34:17.292988 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Apr 24 23:34:17.293870 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 24 23:34:17.294846 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 24 23:34:17.295857 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 24 23:34:17.297169 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 24 23:34:17.298531 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 24 23:34:17.298853 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 24 23:34:17.300320 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 23:34:17.300493 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:34:17.302018 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 23:34:17.302207 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 24 23:34:17.303441 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 23:34:17.303690 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:34:17.305115 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 24 23:34:17.305360 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 24 23:34:17.306487 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:34:17.306845 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:34:17.308048 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 24 23:34:17.309451 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 24 23:34:17.311097 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 24 23:34:17.328147 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Apr 24 23:34:17.359529 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 24 23:34:17.367754 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 24 23:34:17.370744 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 24 23:34:17.370781 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 24 23:34:17.372463 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 24 23:34:17.379852 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 24 23:34:17.382268 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 24 23:34:17.384539 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:34:17.389772 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 24 23:34:17.391831 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 24 23:34:17.392856 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 23:34:17.398248 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 24 23:34:17.399262 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 23:34:17.406871 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 24 23:34:17.413798 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 24 23:34:17.418784 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Apr 24 23:34:17.423075 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 24 23:34:17.427818 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 24 23:34:17.429127 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 24 23:34:17.437904 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 24 23:34:17.449835 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 24 23:34:17.459871 systemd-journald[1122]: Time spent on flushing to /var/log/journal/fe46ca9b42ce4e38ab1f9b0e34719381 is 75.253ms for 979 entries. Apr 24 23:34:17.459871 systemd-journald[1122]: System Journal (/var/log/journal/fe46ca9b42ce4e38ab1f9b0e34719381) is 8.0M, max 195.6M, 187.6M free. Apr 24 23:34:17.571471 systemd-journald[1122]: Received client request to flush runtime journal. Apr 24 23:34:17.575182 kernel: loop0: detected capacity change from 0 to 8 Apr 24 23:34:17.575216 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 24 23:34:17.575230 kernel: loop1: detected capacity change from 0 to 217752 Apr 24 23:34:17.462925 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 24 23:34:17.468706 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 24 23:34:17.482487 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 24 23:34:17.511618 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Apr 24 23:34:17.511632 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Apr 24 23:34:17.512621 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 24 23:34:17.526468 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 24 23:34:17.540222 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 24 23:34:17.551931 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 24 23:34:17.559905 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 24 23:34:17.561024 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 24 23:34:17.586349 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 24 23:34:17.614755 kernel: loop2: detected capacity change from 0 to 140768 Apr 24 23:34:17.650944 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 24 23:34:17.665926 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 24 23:34:17.676571 kernel: loop3: detected capacity change from 0 to 142488 Apr 24 23:34:17.698763 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Apr 24 23:34:17.699108 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Apr 24 23:34:17.713250 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 23:34:17.736879 kernel: loop4: detected capacity change from 0 to 8 Apr 24 23:34:17.741696 kernel: loop5: detected capacity change from 0 to 217752 Apr 24 23:34:17.762852 kernel: loop6: detected capacity change from 0 to 140768 Apr 24 23:34:17.792706 kernel: loop7: detected capacity change from 0 to 142488 Apr 24 23:34:17.819302 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Apr 24 23:34:17.820261 (sd-merge)[1194]: Merged extensions into '/usr'. Apr 24 23:34:17.829481 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... Apr 24 23:34:17.830368 systemd[1]: Reloading... Apr 24 23:34:17.958731 zram_generator::config[1220]: No configuration found. 
Apr 24 23:34:18.051782 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 24 23:34:18.111051 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:34:18.158943 systemd[1]: Reloading finished in 327 ms. Apr 24 23:34:18.185546 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 24 23:34:18.191169 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 24 23:34:18.203327 systemd[1]: Starting ensure-sysext.service... Apr 24 23:34:18.206198 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 24 23:34:18.207686 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 24 23:34:18.221028 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 23:34:18.225650 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)... Apr 24 23:34:18.225668 systemd[1]: Reloading... Apr 24 23:34:18.242966 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 24 23:34:18.243616 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 24 23:34:18.245137 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 24 23:34:18.245470 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Apr 24 23:34:18.245608 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Apr 24 23:34:18.249580 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. 
Apr 24 23:34:18.249593 systemd-tmpfiles[1264]: Skipping /boot Apr 24 23:34:18.265609 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Apr 24 23:34:18.265897 systemd-tmpfiles[1264]: Skipping /boot Apr 24 23:34:18.279919 systemd-udevd[1266]: Using default interface naming scheme 'v255'. Apr 24 23:34:18.332320 zram_generator::config[1292]: No configuration found. Apr 24 23:34:18.533702 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1340) Apr 24 23:34:18.566705 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 24 23:34:18.580700 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 24 23:34:18.584049 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:34:18.590813 kernel: ACPI: button: Power Button [PWRF] Apr 24 23:34:18.613704 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 24 23:34:18.618113 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 24 23:34:18.618346 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 24 23:34:18.628717 kernel: EDAC MC: Ver: 3.0.0 Apr 24 23:34:18.690723 kernel: mousedev: PS/2 mouse device common for all mice Apr 24 23:34:18.694622 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 24 23:34:18.695102 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 24 23:34:18.696167 systemd[1]: Reloading finished in 470 ms. Apr 24 23:34:18.719247 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 23:34:18.725287 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 24 23:34:18.739651 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 24 23:34:18.754963 systemd[1]: Finished ensure-sysext.service. Apr 24 23:34:18.766371 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:34:18.776995 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 24 23:34:18.783035 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 24 23:34:18.784213 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 23:34:18.787823 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 24 23:34:18.792922 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 23:34:18.796834 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 24 23:34:18.801450 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 23:34:18.805881 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 23:34:18.807861 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:34:18.812029 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 24 23:34:18.816234 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 24 23:34:18.828707 lvm[1376]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 24 23:34:18.827902 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 24 23:34:18.838435 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 24 23:34:18.851912 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 24 23:34:18.857862 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 24 23:34:18.867853 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:34:18.874430 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:34:18.875389 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 23:34:18.876238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:34:18.889372 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 23:34:18.889987 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 24 23:34:18.893844 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 24 23:34:18.894936 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 23:34:18.895102 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:34:18.896360 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:34:18.896532 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:34:18.901277 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 24 23:34:18.907947 augenrules[1401]: No rules Apr 24 23:34:18.912628 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 24 23:34:18.923243 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 24 23:34:18.931775 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Apr 24 23:34:18.932598 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 23:34:18.933060 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 23:34:18.942082 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 24 23:34:18.953172 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 24 23:34:18.958521 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 24 23:34:18.960354 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 24 23:34:18.965722 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 24 23:34:18.970894 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 24 23:34:18.971634 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 24 23:34:18.988123 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 24 23:34:19.000911 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 24 23:34:19.014460 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 24 23:34:19.116228 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:34:19.148335 systemd-networkd[1388]: lo: Link UP Apr 24 23:34:19.148343 systemd-networkd[1388]: lo: Gained carrier Apr 24 23:34:19.150519 systemd-networkd[1388]: Enumeration completed Apr 24 23:34:19.150760 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Apr 24 23:34:19.152388 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:34:19.153557 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 24 23:34:19.154540 systemd-networkd[1388]: eth0: Link UP Apr 24 23:34:19.154596 systemd-networkd[1388]: eth0: Gained carrier Apr 24 23:34:19.154641 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:34:19.159157 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 24 23:34:19.163474 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 24 23:34:19.164883 systemd[1]: Reached target time-set.target - System Time Set. Apr 24 23:34:19.175490 systemd-resolved[1389]: Positive Trust Anchors: Apr 24 23:34:19.175512 systemd-resolved[1389]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 24 23:34:19.175544 systemd-resolved[1389]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 24 23:34:19.179843 systemd-resolved[1389]: Defaulting to hostname 'linux'. Apr 24 23:34:19.181794 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 24 23:34:19.182841 systemd[1]: Reached target network.target - Network. 
Apr 24 23:34:19.183560 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 24 23:34:19.184602 systemd[1]: Reached target sysinit.target - System Initialization. Apr 24 23:34:19.185724 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 24 23:34:19.186546 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 24 23:34:19.187851 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 24 23:34:19.188752 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 24 23:34:19.189559 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 24 23:34:19.190387 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 24 23:34:19.190420 systemd[1]: Reached target paths.target - Path Units. Apr 24 23:34:19.191406 systemd[1]: Reached target timers.target - Timer Units. Apr 24 23:34:19.193124 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 24 23:34:19.196116 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 24 23:34:19.202394 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 24 23:34:19.204172 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 24 23:34:19.205213 systemd[1]: Reached target sockets.target - Socket Units. Apr 24 23:34:19.206075 systemd[1]: Reached target basic.target - Basic System. Apr 24 23:34:19.207139 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 24 23:34:19.207186 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 24 23:34:19.212766 systemd[1]: Starting containerd.service - containerd container runtime... 
Apr 24 23:34:19.215900 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 24 23:34:19.226435 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 24 23:34:19.229794 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 24 23:34:19.239890 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 24 23:34:19.242283 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 24 23:34:19.250149 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 24 23:34:19.257791 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 24 23:34:19.259904 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 24 23:34:19.261421 jq[1438]: false Apr 24 23:34:19.266017 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 24 23:34:19.277822 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 24 23:34:19.281843 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 24 23:34:19.282430 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 24 23:34:19.290879 systemd[1]: Starting update-engine.service - Update Engine... 
Apr 24 23:34:19.302114 extend-filesystems[1439]: Found loop4 Apr 24 23:34:19.302114 extend-filesystems[1439]: Found loop5 Apr 24 23:34:19.302114 extend-filesystems[1439]: Found loop6 Apr 24 23:34:19.302114 extend-filesystems[1439]: Found loop7 Apr 24 23:34:19.302114 extend-filesystems[1439]: Found sda Apr 24 23:34:19.302114 extend-filesystems[1439]: Found sda1 Apr 24 23:34:19.302114 extend-filesystems[1439]: Found sda2 Apr 24 23:34:19.302114 extend-filesystems[1439]: Found sda3 Apr 24 23:34:19.302114 extend-filesystems[1439]: Found usr Apr 24 23:34:19.302114 extend-filesystems[1439]: Found sda4 Apr 24 23:34:19.302114 extend-filesystems[1439]: Found sda6 Apr 24 23:34:19.302114 extend-filesystems[1439]: Found sda7 Apr 24 23:34:19.302114 extend-filesystems[1439]: Found sda9 Apr 24 23:34:19.302114 extend-filesystems[1439]: Checking size of /dev/sda9 Apr 24 23:34:19.407424 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Apr 24 23:34:19.407484 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1298) Apr 24 23:34:19.295787 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 24 23:34:19.357844 dbus-daemon[1437]: [system] SELinux support is enabled Apr 24 23:34:19.411870 coreos-metadata[1436]: Apr 24 23:34:19.312 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Apr 24 23:34:19.417971 extend-filesystems[1439]: Resized partition /dev/sda9 Apr 24 23:34:19.309220 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 24 23:34:19.429030 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024) Apr 24 23:34:19.309942 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Apr 24 23:34:19.435808 update_engine[1448]: I20260424 23:34:19.374760 1448 main.cc:92] Flatcar Update Engine starting Apr 24 23:34:19.435808 update_engine[1448]: I20260424 23:34:19.386507 1448 update_check_scheduler.cc:74] Next update check in 4m35s Apr 24 23:34:19.358062 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 24 23:34:19.365719 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 24 23:34:19.365764 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 24 23:34:19.378172 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 24 23:34:19.378193 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 24 23:34:19.447299 jq[1450]: true Apr 24 23:34:19.385553 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 24 23:34:19.449908 tar[1460]: linux-amd64/LICENSE Apr 24 23:34:19.449908 tar[1460]: linux-amd64/helm Apr 24 23:34:19.386212 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 24 23:34:19.386440 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 24 23:34:19.398919 systemd[1]: motdgen.service: Deactivated successfully. Apr 24 23:34:19.399134 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 24 23:34:19.406636 systemd[1]: Started update-engine.service - Update Engine. Apr 24 23:34:19.424333 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Apr 24 23:34:19.461497 jq[1472]: true Apr 24 23:34:19.546115 systemd-logind[1447]: Watching system buttons on /dev/input/event2 (Power Button) Apr 24 23:34:19.546144 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 24 23:34:19.552629 systemd-logind[1447]: New seat seat0. Apr 24 23:34:19.560504 systemd[1]: Started systemd-logind.service - User Login Management. Apr 24 23:34:19.617798 bash[1500]: Updated "/home/core/.ssh/authorized_keys" Apr 24 23:34:19.620591 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 24 23:34:19.630756 systemd[1]: Starting sshkeys.service... Apr 24 23:34:19.642582 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 24 23:34:19.646101 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 24 23:34:19.651000 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 24 23:34:19.681893 coreos-metadata[1508]: Apr 24 23:34:19.681 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Apr 24 23:34:19.704123 containerd[1467]: time="2026-04-24T23:34:19.703099070Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 24 23:34:19.733687 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Apr 24 23:34:19.746703 extend-filesystems[1458]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 24 23:34:19.746703 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 10 Apr 24 23:34:19.746703 extend-filesystems[1458]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Apr 24 23:34:19.760320 extend-filesystems[1439]: Resized filesystem in /dev/sda9 Apr 24 23:34:19.763378 containerd[1467]: time="2026-04-24T23:34:19.758754080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Apr 24 23:34:19.751750 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 24 23:34:19.752200 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 24 23:34:19.767074 containerd[1467]: time="2026-04-24T23:34:19.767050010Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:34:19.767139 containerd[1467]: time="2026-04-24T23:34:19.767125670Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 24 23:34:19.767207 containerd[1467]: time="2026-04-24T23:34:19.767175920Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 24 23:34:19.767925 containerd[1467]: time="2026-04-24T23:34:19.767908040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 24 23:34:19.768050 containerd[1467]: time="2026-04-24T23:34:19.768035630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 24 23:34:19.769264 containerd[1467]: time="2026-04-24T23:34:19.768157370Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:34:19.769264 containerd[1467]: time="2026-04-24T23:34:19.768174500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:34:19.769264 containerd[1467]: time="2026-04-24T23:34:19.768347820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:34:19.769264 containerd[1467]: time="2026-04-24T23:34:19.768362210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 24 23:34:19.769264 containerd[1467]: time="2026-04-24T23:34:19.768373960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:34:19.769264 containerd[1467]: time="2026-04-24T23:34:19.768382860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 24 23:34:19.769264 containerd[1467]: time="2026-04-24T23:34:19.768476290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:34:19.769264 containerd[1467]: time="2026-04-24T23:34:19.768743770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:34:19.769264 containerd[1467]: time="2026-04-24T23:34:19.768868320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:34:19.769264 containerd[1467]: time="2026-04-24T23:34:19.768881540Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 24 23:34:19.769264 containerd[1467]: time="2026-04-24T23:34:19.768981410Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 24 23:34:19.769485 containerd[1467]: time="2026-04-24T23:34:19.769037510Z" level=info msg="metadata content store policy set" policy=shared Apr 24 23:34:19.775012 containerd[1467]: time="2026-04-24T23:34:19.774984980Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 24 23:34:19.775338 containerd[1467]: time="2026-04-24T23:34:19.775322450Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 24 23:34:19.775440 containerd[1467]: time="2026-04-24T23:34:19.775426330Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 24 23:34:19.775521 containerd[1467]: time="2026-04-24T23:34:19.775507920Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 24 23:34:19.775598 containerd[1467]: time="2026-04-24T23:34:19.775585670Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 24 23:34:19.776156 containerd[1467]: time="2026-04-24T23:34:19.776127590Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 24 23:34:19.777069 containerd[1467]: time="2026-04-24T23:34:19.777050720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 24 23:34:19.777429 containerd[1467]: time="2026-04-24T23:34:19.777412630Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 24 23:34:19.777833 containerd[1467]: time="2026-04-24T23:34:19.777705530Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 24 23:34:19.777833 containerd[1467]: time="2026-04-24T23:34:19.777723670Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Apr 24 23:34:19.777833 containerd[1467]: time="2026-04-24T23:34:19.777741850Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 24 23:34:19.777833 containerd[1467]: time="2026-04-24T23:34:19.777753680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 24 23:34:19.777833 containerd[1467]: time="2026-04-24T23:34:19.777764360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 24 23:34:19.777833 containerd[1467]: time="2026-04-24T23:34:19.777775520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 24 23:34:19.777833 containerd[1467]: time="2026-04-24T23:34:19.777788570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 24 23:34:19.777833 containerd[1467]: time="2026-04-24T23:34:19.777802760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 24 23:34:19.777833 containerd[1467]: time="2026-04-24T23:34:19.777813040Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 24 23:34:19.778357 containerd[1467]: time="2026-04-24T23:34:19.778013370Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 24 23:34:19.778357 containerd[1467]: time="2026-04-24T23:34:19.778047760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 24 23:34:19.778357 containerd[1467]: time="2026-04-24T23:34:19.778063070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Apr 24 23:34:19.778357 containerd[1467]: time="2026-04-24T23:34:19.778074430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 24 23:34:19.778357 containerd[1467]: time="2026-04-24T23:34:19.778085430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 24 23:34:19.778357 containerd[1467]: time="2026-04-24T23:34:19.778095980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 24 23:34:19.778357 containerd[1467]: time="2026-04-24T23:34:19.778107430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 24 23:34:19.778357 containerd[1467]: time="2026-04-24T23:34:19.778116910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 24 23:34:19.778357 containerd[1467]: time="2026-04-24T23:34:19.778126910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 24 23:34:19.778357 containerd[1467]: time="2026-04-24T23:34:19.778143710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 24 23:34:19.778357 containerd[1467]: time="2026-04-24T23:34:19.778156410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 24 23:34:19.778357 containerd[1467]: time="2026-04-24T23:34:19.778166390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 24 23:34:19.778357 containerd[1467]: time="2026-04-24T23:34:19.778176820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 24 23:34:19.778357 containerd[1467]: time="2026-04-24T23:34:19.778186690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Apr 24 23:34:19.780527 containerd[1467]: time="2026-04-24T23:34:19.778862200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 24 23:34:19.780527 containerd[1467]: time="2026-04-24T23:34:19.778889610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 24 23:34:19.780527 containerd[1467]: time="2026-04-24T23:34:19.778913200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 24 23:34:19.780527 containerd[1467]: time="2026-04-24T23:34:19.778926290Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 24 23:34:19.780527 containerd[1467]: time="2026-04-24T23:34:19.779246280Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 24 23:34:19.780527 containerd[1467]: time="2026-04-24T23:34:19.779265660Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 24 23:34:19.780527 containerd[1467]: time="2026-04-24T23:34:19.779275690Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 24 23:34:19.780527 containerd[1467]: time="2026-04-24T23:34:19.779342780Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 24 23:34:19.780527 containerd[1467]: time="2026-04-24T23:34:19.779355210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 24 23:34:19.780527 containerd[1467]: time="2026-04-24T23:34:19.779367240Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Apr 24 23:34:19.780527 containerd[1467]: time="2026-04-24T23:34:19.779383470Z" level=info msg="NRI interface is disabled by configuration." Apr 24 23:34:19.780527 containerd[1467]: time="2026-04-24T23:34:19.779396820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 24 23:34:19.780834 containerd[1467]: time="2026-04-24T23:34:19.779658560Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 24 23:34:19.780995 containerd[1467]: time="2026-04-24T23:34:19.780979960Z" level=info msg="Connect containerd service" Apr 24 23:34:19.781061 containerd[1467]: time="2026-04-24T23:34:19.781049110Z" level=info msg="using legacy CRI server" Apr 24 23:34:19.781111 containerd[1467]: time="2026-04-24T23:34:19.781098860Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 24 23:34:19.781874 containerd[1467]: time="2026-04-24T23:34:19.781209490Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 24 23:34:19.782659 containerd[1467]: time="2026-04-24T23:34:19.782638400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 23:34:19.782971 containerd[1467]: time="2026-04-24T23:34:19.782941640Z" level=info msg="Start subscribing containerd event" Apr 24 
23:34:19.783032 containerd[1467]: time="2026-04-24T23:34:19.783020270Z" level=info msg="Start recovering state" Apr 24 23:34:19.783169 containerd[1467]: time="2026-04-24T23:34:19.783155070Z" level=info msg="Start event monitor" Apr 24 23:34:19.783696 containerd[1467]: time="2026-04-24T23:34:19.783431360Z" level=info msg="Start snapshots syncer" Apr 24 23:34:19.783696 containerd[1467]: time="2026-04-24T23:34:19.783443900Z" level=info msg="Start cni network conf syncer for default" Apr 24 23:34:19.783696 containerd[1467]: time="2026-04-24T23:34:19.783451170Z" level=info msg="Start streaming server" Apr 24 23:34:19.785141 containerd[1467]: time="2026-04-24T23:34:19.784589050Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 24 23:34:19.785141 containerd[1467]: time="2026-04-24T23:34:19.784647930Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 24 23:34:19.785392 systemd[1]: Started containerd.service - containerd container runtime. Apr 24 23:34:19.786509 containerd[1467]: time="2026-04-24T23:34:19.785825300Z" level=info msg="containerd successfully booted in 0.085522s" Apr 24 23:34:19.849608 sshd_keygen[1473]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 24 23:34:19.878913 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 24 23:34:19.888269 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 24 23:34:19.896857 systemd[1]: issuegen.service: Deactivated successfully. Apr 24 23:34:19.897095 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Apr 24 23:34:19.898755 systemd-networkd[1388]: eth0: DHCPv4 address 172.238.161.65/24, gateway 172.238.161.1 acquired from 23.213.15.243 Apr 24 23:34:19.899075 dbus-daemon[1437]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1388 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 24 23:34:19.901493 systemd-timesyncd[1390]: Network configuration changed, trying to establish connection. Apr 24 23:34:19.914272 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 24 23:34:19.920955 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 24 23:34:19.956446 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 24 23:34:19.968214 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 24 23:34:19.975105 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 24 23:34:20.004963 systemd[1]: Reached target getty.target - Login Prompts. Apr 24 23:34:20.024103 dbus-daemon[1437]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 24 23:34:20.024388 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 24 23:34:20.025488 dbus-daemon[1437]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1530 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 24 23:34:20.036101 systemd[1]: Starting polkit.service - Authorization Manager... 
Apr 24 23:34:20.048289 polkitd[1535]: Started polkitd version 121 Apr 24 23:34:20.053140 polkitd[1535]: Loading rules from directory /etc/polkit-1/rules.d Apr 24 23:34:20.053220 polkitd[1535]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 24 23:34:20.054432 polkitd[1535]: Finished loading, compiling and executing 2 rules Apr 24 23:34:20.055292 dbus-daemon[1437]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 24 23:34:20.055645 systemd[1]: Started polkit.service - Authorization Manager. Apr 24 23:34:20.057978 polkitd[1535]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 24 23:34:20.069646 systemd-hostnamed[1530]: Hostname set to <172-238-161-65> (transient) Apr 24 23:34:20.070011 systemd-resolved[1389]: System hostname changed to '172-238-161-65'. Apr 24 23:34:20.212612 tar[1460]: linux-amd64/README.md Apr 24 23:34:20.224893 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 24 23:34:20.323983 coreos-metadata[1436]: Apr 24 23:34:20.323 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Apr 24 23:34:20.417805 coreos-metadata[1436]: Apr 24 23:34:20.417 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Apr 24 23:34:20.602439 coreos-metadata[1436]: Apr 24 23:34:20.602 INFO Fetch successful Apr 24 23:34:20.602439 coreos-metadata[1436]: Apr 24 23:34:20.602 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Apr 24 23:34:20.693024 coreos-metadata[1508]: Apr 24 23:34:20.692 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Apr 24 23:34:20.786772 coreos-metadata[1508]: Apr 24 23:34:20.786 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Apr 24 23:34:20.863177 coreos-metadata[1436]: Apr 24 23:34:20.862 INFO Fetch successful Apr 24 23:34:20.920739 coreos-metadata[1508]: Apr 24 23:34:20.920 INFO Fetch successful Apr 24 23:34:20.936460 update-ssh-keys[1564]: Updated "/home/core/.ssh/authorized_keys" Apr 24 23:34:20.937038 systemd[1]: Finished 
coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 24 23:34:20.940768 systemd[1]: Finished sshkeys.service. Apr 24 23:34:20.948279 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 24 23:34:20.950014 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 24 23:34:21.019900 systemd-networkd[1388]: eth0: Gained IPv6LL Apr 24 23:34:21.023401 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 24 23:34:21.024940 systemd[1]: Reached target network-online.target - Network is Online. Apr 24 23:34:21.039259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:34:21.042506 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 24 23:34:21.073228 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 24 23:34:21.981024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:34:21.982821 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 24 23:34:21.984340 systemd[1]: Startup finished in 1.081s (kernel) + 8.405s (initrd) + 5.890s (userspace) = 15.377s. Apr 24 23:34:21.988144 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:34:22.472959 kubelet[1590]: E0424 23:34:22.472841 1590 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:34:22.476725 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:34:22.476949 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 24 23:34:23.114286 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 24 23:34:23.118868 systemd[1]: Started sshd@0-172.238.161.65:22-4.175.71.9:38806.service - OpenSSH per-connection server daemon (4.175.71.9:38806). Apr 24 23:34:23.722891 sshd[1602]: Accepted publickey for core from 4.175.71.9 port 38806 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI Apr 24 23:34:23.725080 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:34:23.734177 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 24 23:34:23.742051 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 24 23:34:23.746058 systemd-logind[1447]: New session 1 of user core. Apr 24 23:34:23.756174 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 24 23:34:23.762896 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 24 23:34:23.767952 (systemd)[1606]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 24 23:34:23.879039 systemd[1606]: Queued start job for default target default.target. Apr 24 23:34:23.889073 systemd[1606]: Created slice app.slice - User Application Slice. Apr 24 23:34:23.889105 systemd[1606]: Reached target paths.target - Paths. Apr 24 23:34:23.889120 systemd[1606]: Reached target timers.target - Timers. Apr 24 23:34:23.890756 systemd[1606]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 24 23:34:23.903542 systemd[1606]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 24 23:34:23.903913 systemd[1606]: Reached target sockets.target - Sockets. Apr 24 23:34:23.903933 systemd[1606]: Reached target basic.target - Basic System. Apr 24 23:34:23.903994 systemd[1606]: Reached target default.target - Main User Target. Apr 24 23:34:23.904053 systemd[1606]: Startup finished in 126ms. 
Apr 24 23:34:23.904199 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 24 23:34:23.912817 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 24 23:34:24.355272 systemd[1]: Started sshd@1-172.238.161.65:22-4.175.71.9:38818.service - OpenSSH per-connection server daemon (4.175.71.9:38818). Apr 24 23:34:24.952522 sshd[1617]: Accepted publickey for core from 4.175.71.9 port 38818 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI Apr 24 23:34:24.953369 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:34:24.958845 systemd-logind[1447]: New session 2 of user core. Apr 24 23:34:24.967825 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 24 23:34:25.379737 sshd[1617]: pam_unix(sshd:session): session closed for user core Apr 24 23:34:25.383611 systemd[1]: sshd@1-172.238.161.65:22-4.175.71.9:38818.service: Deactivated successfully. Apr 24 23:34:25.386152 systemd[1]: session-2.scope: Deactivated successfully. Apr 24 23:34:25.387529 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. Apr 24 23:34:25.389109 systemd-logind[1447]: Removed session 2. Apr 24 23:34:25.489735 systemd[1]: Started sshd@2-172.238.161.65:22-4.175.71.9:57888.service - OpenSSH per-connection server daemon (4.175.71.9:57888). Apr 24 23:34:26.097042 sshd[1624]: Accepted publickey for core from 4.175.71.9 port 57888 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI Apr 24 23:34:26.102013 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:34:26.108095 systemd-logind[1447]: New session 3 of user core. Apr 24 23:34:26.122934 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 24 23:34:26.526277 sshd[1624]: pam_unix(sshd:session): session closed for user core Apr 24 23:34:26.529539 systemd[1]: sshd@2-172.238.161.65:22-4.175.71.9:57888.service: Deactivated successfully. 
Apr 24 23:34:26.531765 systemd[1]: session-3.scope: Deactivated successfully. Apr 24 23:34:26.533172 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. Apr 24 23:34:26.534580 systemd-logind[1447]: Removed session 3. Apr 24 23:34:26.637180 systemd[1]: Started sshd@3-172.238.161.65:22-4.175.71.9:57896.service - OpenSSH per-connection server daemon (4.175.71.9:57896). Apr 24 23:34:27.241164 sshd[1631]: Accepted publickey for core from 4.175.71.9 port 57896 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI Apr 24 23:34:27.241832 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:34:27.246720 systemd-logind[1447]: New session 4 of user core. Apr 24 23:34:27.255845 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 24 23:34:27.672011 sshd[1631]: pam_unix(sshd:session): session closed for user core Apr 24 23:34:27.675489 systemd[1]: sshd@3-172.238.161.65:22-4.175.71.9:57896.service: Deactivated successfully. Apr 24 23:34:27.677485 systemd[1]: session-4.scope: Deactivated successfully. Apr 24 23:34:27.678906 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. Apr 24 23:34:27.680045 systemd-logind[1447]: Removed session 4. Apr 24 23:34:27.782965 systemd[1]: Started sshd@4-172.238.161.65:22-4.175.71.9:57900.service - OpenSSH per-connection server daemon (4.175.71.9:57900). Apr 24 23:34:28.385703 sshd[1638]: Accepted publickey for core from 4.175.71.9 port 57900 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI Apr 24 23:34:28.386780 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:34:28.391142 systemd-logind[1447]: New session 5 of user core. Apr 24 23:34:28.397955 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 24 23:34:28.730823 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 24 23:34:28.731190 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:34:28.746852 sudo[1641]: pam_unix(sudo:session): session closed for user root Apr 24 23:34:28.844575 sshd[1638]: pam_unix(sshd:session): session closed for user core Apr 24 23:34:28.848496 systemd[1]: sshd@4-172.238.161.65:22-4.175.71.9:57900.service: Deactivated successfully. Apr 24 23:34:28.850955 systemd[1]: session-5.scope: Deactivated successfully. Apr 24 23:34:28.852647 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. Apr 24 23:34:28.854192 systemd-logind[1447]: Removed session 5. Apr 24 23:34:28.953647 systemd[1]: Started sshd@5-172.238.161.65:22-4.175.71.9:57904.service - OpenSSH per-connection server daemon (4.175.71.9:57904). Apr 24 23:34:29.587926 sshd[1646]: Accepted publickey for core from 4.175.71.9 port 57904 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI Apr 24 23:34:29.588622 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:34:29.595695 systemd-logind[1447]: New session 6 of user core. Apr 24 23:34:29.602025 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 24 23:34:29.941646 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 24 23:34:29.942057 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:34:29.946871 sudo[1650]: pam_unix(sudo:session): session closed for user root Apr 24 23:34:29.954583 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 24 23:34:29.955163 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:34:29.972095 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 24 23:34:29.974309 auditctl[1653]: No rules Apr 24 23:34:29.975974 systemd[1]: audit-rules.service: Deactivated successfully. Apr 24 23:34:29.976261 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 24 23:34:29.978769 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 24 23:34:30.011472 augenrules[1671]: No rules Apr 24 23:34:30.013249 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 24 23:34:30.015278 sudo[1649]: pam_unix(sudo:session): session closed for user root Apr 24 23:34:30.116889 sshd[1646]: pam_unix(sshd:session): session closed for user core Apr 24 23:34:30.120532 systemd[1]: sshd@5-172.238.161.65:22-4.175.71.9:57904.service: Deactivated successfully. Apr 24 23:34:30.122662 systemd[1]: session-6.scope: Deactivated successfully. Apr 24 23:34:30.124147 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. Apr 24 23:34:30.125407 systemd-logind[1447]: Removed session 6. Apr 24 23:34:30.228494 systemd[1]: Started sshd@6-172.238.161.65:22-4.175.71.9:57920.service - OpenSSH per-connection server daemon (4.175.71.9:57920). Apr 24 23:34:30.358069 systemd-timesyncd[1390]: Timed out waiting for reply from 23.150.40.242:123 (0.flatcar.pool.ntp.org). 
Apr 24 23:34:30.432735 systemd-timesyncd[1390]: Contacted time server 138.68.201.49:123 (0.flatcar.pool.ntp.org). Apr 24 23:34:30.432819 systemd-timesyncd[1390]: Initial clock synchronization to Fri 2026-04-24 23:34:30.356637 UTC. Apr 24 23:34:30.866466 sshd[1679]: Accepted publickey for core from 4.175.71.9 port 57920 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI Apr 24 23:34:30.868442 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:34:30.874227 systemd-logind[1447]: New session 7 of user core. Apr 24 23:34:30.880818 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 24 23:34:31.217496 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 24 23:34:31.218119 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:34:31.496996 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 24 23:34:31.497286 (dockerd)[1699]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 24 23:34:31.778592 dockerd[1699]: time="2026-04-24T23:34:31.777998482Z" level=info msg="Starting up" Apr 24 23:34:31.856238 systemd[1]: var-lib-docker-metacopy\x2dcheck1847320432-merged.mount: Deactivated successfully. Apr 24 23:34:31.882330 dockerd[1699]: time="2026-04-24T23:34:31.882298464Z" level=info msg="Loading containers: start." Apr 24 23:34:32.002952 kernel: Initializing XFRM netlink socket Apr 24 23:34:32.085186 systemd-networkd[1388]: docker0: Link UP Apr 24 23:34:32.098202 dockerd[1699]: time="2026-04-24T23:34:32.098162509Z" level=info msg="Loading containers: done." 
Apr 24 23:34:32.115894 dockerd[1699]: time="2026-04-24T23:34:32.115854506Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 24 23:34:32.116047 dockerd[1699]: time="2026-04-24T23:34:32.115949402Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 24 23:34:32.116094 dockerd[1699]: time="2026-04-24T23:34:32.116077819Z" level=info msg="Daemon has completed initialization" Apr 24 23:34:32.147314 dockerd[1699]: time="2026-04-24T23:34:32.147262796Z" level=info msg="API listen on /run/docker.sock" Apr 24 23:34:32.147602 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 24 23:34:32.608783 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 24 23:34:32.625881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:34:32.774270 containerd[1467]: time="2026-04-24T23:34:32.774202992Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 24 23:34:32.798347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:34:32.812013 (kubelet)[1846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:34:32.866113 kubelet[1846]: E0424 23:34:32.865796 1846 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:34:32.871881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:34:32.872156 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:34:33.348102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3347707348.mount: Deactivated successfully. Apr 24 23:34:34.236823 containerd[1467]: time="2026-04-24T23:34:34.236775867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:34.238072 containerd[1467]: time="2026-04-24T23:34:34.237864561Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27579429" Apr 24 23:34:34.238935 containerd[1467]: time="2026-04-24T23:34:34.238603201Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:34.241086 containerd[1467]: time="2026-04-24T23:34:34.241058343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:34.242216 containerd[1467]: time="2026-04-24T23:34:34.242187717Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id 
\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 1.467953938s" Apr 24 23:34:34.242265 containerd[1467]: time="2026-04-24T23:34:34.242219206Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\"" Apr 24 23:34:34.243355 containerd[1467]: time="2026-04-24T23:34:34.243336476Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 24 23:34:35.214972 containerd[1467]: time="2026-04-24T23:34:35.214899313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:35.216198 containerd[1467]: time="2026-04-24T23:34:35.216156460Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451665" Apr 24 23:34:35.216246 containerd[1467]: time="2026-04-24T23:34:35.216224231Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:35.221267 containerd[1467]: time="2026-04-24T23:34:35.221095517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:35.221934 containerd[1467]: time="2026-04-24T23:34:35.221910684Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 978.49141ms" Apr 24 23:34:35.222008 containerd[1467]: time="2026-04-24T23:34:35.221992546Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\"" Apr 24 23:34:35.223178 containerd[1467]: time="2026-04-24T23:34:35.223050546Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 24 23:34:36.044721 containerd[1467]: time="2026-04-24T23:34:36.044342785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:36.045905 containerd[1467]: time="2026-04-24T23:34:36.045233335Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555296" Apr 24 23:34:36.045905 containerd[1467]: time="2026-04-24T23:34:36.045864811Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:36.048843 containerd[1467]: time="2026-04-24T23:34:36.048804630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:36.049707 containerd[1467]: time="2026-04-24T23:34:36.049586212Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 826.332865ms" Apr 24 23:34:36.049707 
containerd[1467]: time="2026-04-24T23:34:36.049612991Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\"" Apr 24 23:34:36.050543 containerd[1467]: time="2026-04-24T23:34:36.050521663Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 24 23:34:37.010622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2301918937.mount: Deactivated successfully. Apr 24 23:34:37.235174 containerd[1467]: time="2026-04-24T23:34:37.235125777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:37.236151 containerd[1467]: time="2026-04-24T23:34:37.236051652Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699931" Apr 24 23:34:37.237835 containerd[1467]: time="2026-04-24T23:34:37.236573990Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:37.239809 containerd[1467]: time="2026-04-24T23:34:37.239177688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:37.239809 containerd[1467]: time="2026-04-24T23:34:37.239695158Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 1.189121767s" Apr 24 23:34:37.239809 containerd[1467]: time="2026-04-24T23:34:37.239719454Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\"" Apr 24 23:34:37.240236 containerd[1467]: time="2026-04-24T23:34:37.240203726Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 24 23:34:37.763267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount498434392.mount: Deactivated successfully. Apr 24 23:34:38.586057 containerd[1467]: time="2026-04-24T23:34:38.586018907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:38.586795 containerd[1467]: time="2026-04-24T23:34:38.586640505Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556548" Apr 24 23:34:38.588707 containerd[1467]: time="2026-04-24T23:34:38.588661366Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:38.592292 containerd[1467]: time="2026-04-24T23:34:38.592253397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:38.594263 containerd[1467]: time="2026-04-24T23:34:38.593719341Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.353488656s" Apr 24 23:34:38.594263 containerd[1467]: time="2026-04-24T23:34:38.593746589Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference 
\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Apr 24 23:34:38.594491 containerd[1467]: time="2026-04-24T23:34:38.594473563Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 24 23:34:39.086803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4061297027.mount: Deactivated successfully. Apr 24 23:34:39.091589 containerd[1467]: time="2026-04-24T23:34:39.091518791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:39.092537 containerd[1467]: time="2026-04-24T23:34:39.092513681Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321224" Apr 24 23:34:39.094037 containerd[1467]: time="2026-04-24T23:34:39.092887300Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:39.094727 containerd[1467]: time="2026-04-24T23:34:39.094692084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:39.095738 containerd[1467]: time="2026-04-24T23:34:39.095545346Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 500.999429ms" Apr 24 23:34:39.095738 containerd[1467]: time="2026-04-24T23:34:39.095578807Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 24 23:34:39.096657 containerd[1467]: 
time="2026-04-24T23:34:39.096284920Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 24 23:34:39.712168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount723874563.mount: Deactivated successfully. Apr 24 23:34:40.352451 containerd[1467]: time="2026-04-24T23:34:40.352397802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:40.353291 containerd[1467]: time="2026-04-24T23:34:40.353243516Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23644471" Apr 24 23:34:40.354157 containerd[1467]: time="2026-04-24T23:34:40.353898015Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:40.357780 containerd[1467]: time="2026-04-24T23:34:40.356494147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:40.357780 containerd[1467]: time="2026-04-24T23:34:40.357645826Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.261261956s" Apr 24 23:34:40.357780 containerd[1467]: time="2026-04-24T23:34:40.357694227Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Apr 24 23:34:41.397272 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:34:41.411232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:34:41.445633 systemd[1]: Reloading requested from client PID 2075 ('systemctl') (unit session-7.scope)... Apr 24 23:34:41.445800 systemd[1]: Reloading... Apr 24 23:34:41.605742 zram_generator::config[2118]: No configuration found. Apr 24 23:34:41.720841 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:34:41.794416 systemd[1]: Reloading finished in 348 ms. Apr 24 23:34:41.852921 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 24 23:34:41.853058 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 24 23:34:41.853415 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:34:41.859916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:34:42.023804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:34:42.029302 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:34:42.068552 kubelet[2170]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 24 23:34:42.218533 kubelet[2170]: I0424 23:34:42.218458 2170 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 24 23:34:42.218533 kubelet[2170]: I0424 23:34:42.218516 2170 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:34:42.218533 kubelet[2170]: I0424 23:34:42.218540 2170 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 24 23:34:42.218533 kubelet[2170]: I0424 23:34:42.218546 2170 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 23:34:42.219047 kubelet[2170]: I0424 23:34:42.219025 2170 server.go:951] "Client rotation is on, will bootstrap in background" Apr 24 23:34:42.226690 kubelet[2170]: E0424 23:34:42.224897 2170 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.238.161.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.238.161.65:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 23:34:42.226690 kubelet[2170]: I0424 23:34:42.226105 2170 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:34:42.230341 kubelet[2170]: E0424 23:34:42.230317 2170 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:34:42.230390 kubelet[2170]: I0424 23:34:42.230355 2170 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 24 23:34:42.234327 kubelet[2170]: I0424 23:34:42.234313 2170 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 24 23:34:42.236554 kubelet[2170]: I0424 23:34:42.236513 2170 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:34:42.236903 kubelet[2170]: I0424 23:34:42.236547 2170 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-238-161-65","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 23:34:42.236903 kubelet[2170]: I0424 23:34:42.236901 2170 topology_manager.go:143] "Creating topology manager with none policy" Apr 24 
23:34:42.237010 kubelet[2170]: I0424 23:34:42.236910 2170 container_manager_linux.go:308] "Creating device plugin manager" Apr 24 23:34:42.237010 kubelet[2170]: I0424 23:34:42.236999 2170 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 24 23:34:42.238086 kubelet[2170]: I0424 23:34:42.238071 2170 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 24 23:34:42.238243 kubelet[2170]: I0424 23:34:42.238230 2170 kubelet.go:482] "Attempting to sync node with API server" Apr 24 23:34:42.238287 kubelet[2170]: I0424 23:34:42.238252 2170 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:34:42.238287 kubelet[2170]: I0424 23:34:42.238279 2170 kubelet.go:394] "Adding apiserver pod source" Apr 24 23:34:42.238336 kubelet[2170]: I0424 23:34:42.238289 2170 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:34:42.241130 kubelet[2170]: I0424 23:34:42.240376 2170 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:34:42.242536 kubelet[2170]: I0424 23:34:42.242258 2170 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:34:42.242536 kubelet[2170]: I0424 23:34:42.242285 2170 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 24 23:34:42.242536 kubelet[2170]: W0424 23:34:42.242355 2170 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 24 23:34:42.246573 kubelet[2170]: I0424 23:34:42.245271 2170 server.go:1257] "Started kubelet" Apr 24 23:34:42.246845 kubelet[2170]: I0424 23:34:42.246825 2170 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:34:42.247661 kubelet[2170]: I0424 23:34:42.247647 2170 server.go:317] "Adding debug handlers to kubelet server" Apr 24 23:34:42.250084 kubelet[2170]: I0424 23:34:42.250046 2170 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:34:42.250196 kubelet[2170]: I0424 23:34:42.250183 2170 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 24 23:34:42.250770 kubelet[2170]: I0424 23:34:42.250756 2170 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:34:42.253001 kubelet[2170]: E0424 23:34:42.250904 2170 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.238.161.65:6443/api/v1/namespaces/default/events\": dial tcp 172.238.161.65:6443: connect: connection refused" event="&Event{ObjectMeta:{172-238-161-65.18a96f1941ec71de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-238-161-65,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-238-161-65,},FirstTimestamp:2026-04-24 23:34:42.24525155 +0000 UTC m=+0.211397582,LastTimestamp:2026-04-24 23:34:42.24525155 +0000 UTC m=+0.211397582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-238-161-65,}" Apr 24 23:34:42.254663 kubelet[2170]: I0424 23:34:42.254647 2170 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 24 23:34:42.255728 kubelet[2170]: E0424 23:34:42.255702 2170 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 23:34:42.256145 kubelet[2170]: I0424 23:34:42.256121 2170 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:34:42.258562 kubelet[2170]: I0424 23:34:42.258547 2170 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 24 23:34:42.259109 kubelet[2170]: E0424 23:34:42.259092 2170 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-238-161-65\" not found" Apr 24 23:34:42.261573 kubelet[2170]: I0424 23:34:42.261544 2170 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:34:42.261886 kubelet[2170]: I0424 23:34:42.261837 2170 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:34:42.263909 kubelet[2170]: I0424 23:34:42.263889 2170 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 24 23:34:42.264923 kubelet[2170]: I0424 23:34:42.264211 2170 reconciler.go:29] "Reconciler: start to sync state" Apr 24 23:34:42.264923 kubelet[2170]: E0424 23:34:42.264339 2170 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.161.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-161-65?timeout=10s\": dial tcp 172.238.161.65:6443: connect: connection refused" interval="200ms" Apr 24 23:34:42.265960 kubelet[2170]: I0424 23:34:42.265664 2170 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 24 23:34:42.266093 kubelet[2170]: I0424 23:34:42.266080 2170 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:34:42.287301 kubelet[2170]: I0424 23:34:42.287206 2170 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 24 23:34:42.287395 kubelet[2170]: I0424 23:34:42.287383 2170 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 24 23:34:42.287459 kubelet[2170]: I0424 23:34:42.287449 2170 kubelet.go:2501] "Starting kubelet main sync loop" Apr 24 23:34:42.287551 kubelet[2170]: E0424 23:34:42.287534 2170 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:34:42.307623 kubelet[2170]: I0424 23:34:42.307598 2170 cpu_manager.go:225] "Starting" policy="none" Apr 24 23:34:42.307623 kubelet[2170]: I0424 23:34:42.307614 2170 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 24 23:34:42.307729 kubelet[2170]: I0424 23:34:42.307631 2170 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 24 23:34:42.309372 kubelet[2170]: I0424 23:34:42.309356 2170 policy_none.go:50] "Start" Apr 24 23:34:42.309614 kubelet[2170]: I0424 23:34:42.309374 2170 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 24 23:34:42.309614 kubelet[2170]: I0424 23:34:42.309387 2170 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 24 23:34:42.310373 kubelet[2170]: I0424 23:34:42.310359 2170 policy_none.go:44] "Start" Apr 24 23:34:42.315692 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 24 23:34:42.329086 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 24 23:34:42.334519 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 24 23:34:42.347275 kubelet[2170]: E0424 23:34:42.346220 2170 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:34:42.347275 kubelet[2170]: I0424 23:34:42.346658 2170 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 24 23:34:42.347275 kubelet[2170]: I0424 23:34:42.346695 2170 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:34:42.347275 kubelet[2170]: I0424 23:34:42.346985 2170 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 24 23:34:42.349074 kubelet[2170]: E0424 23:34:42.349052 2170 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 23:34:42.349195 kubelet[2170]: E0424 23:34:42.349177 2170 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-238-161-65\" not found" Apr 24 23:34:42.407647 systemd[1]: Created slice kubepods-burstable-pod7107e88a045fbdbe493cb46a69cd72d2.slice - libcontainer container kubepods-burstable-pod7107e88a045fbdbe493cb46a69cd72d2.slice. Apr 24 23:34:42.426991 kubelet[2170]: E0424 23:34:42.426962 2170 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-161-65\" not found" node="172-238-161-65" Apr 24 23:34:42.430326 systemd[1]: Created slice kubepods-burstable-pod23a77ea1939ce7330079bf948095bd9a.slice - libcontainer container kubepods-burstable-pod23a77ea1939ce7330079bf948095bd9a.slice. 
Apr 24 23:34:42.434546 kubelet[2170]: E0424 23:34:42.434507 2170 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-161-65\" not found" node="172-238-161-65" Apr 24 23:34:42.438870 systemd[1]: Created slice kubepods-burstable-pod95d044c9cb9e2b80c4195d77fce91dfc.slice - libcontainer container kubepods-burstable-pod95d044c9cb9e2b80c4195d77fce91dfc.slice. Apr 24 23:34:42.441847 kubelet[2170]: E0424 23:34:42.441818 2170 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-161-65\" not found" node="172-238-161-65" Apr 24 23:34:42.450329 kubelet[2170]: I0424 23:34:42.450080 2170 kubelet_node_status.go:74] "Attempting to register node" node="172-238-161-65" Apr 24 23:34:42.451012 kubelet[2170]: E0424 23:34:42.450935 2170 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.238.161.65:6443/api/v1/nodes\": dial tcp 172.238.161.65:6443: connect: connection refused" node="172-238-161-65" Apr 24 23:34:42.464958 kubelet[2170]: E0424 23:34:42.464906 2170 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.161.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-161-65?timeout=10s\": dial tcp 172.238.161.65:6443: connect: connection refused" interval="400ms" Apr 24 23:34:42.465945 kubelet[2170]: I0424 23:34:42.465887 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7107e88a045fbdbe493cb46a69cd72d2-ca-certs\") pod \"kube-apiserver-172-238-161-65\" (UID: \"7107e88a045fbdbe493cb46a69cd72d2\") " pod="kube-system/kube-apiserver-172-238-161-65" Apr 24 23:34:42.465945 kubelet[2170]: I0424 23:34:42.465934 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/23a77ea1939ce7330079bf948095bd9a-flexvolume-dir\") pod \"kube-controller-manager-172-238-161-65\" (UID: \"23a77ea1939ce7330079bf948095bd9a\") " pod="kube-system/kube-controller-manager-172-238-161-65" Apr 24 23:34:42.465945 kubelet[2170]: I0424 23:34:42.465957 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a77ea1939ce7330079bf948095bd9a-k8s-certs\") pod \"kube-controller-manager-172-238-161-65\" (UID: \"23a77ea1939ce7330079bf948095bd9a\") " pod="kube-system/kube-controller-manager-172-238-161-65" Apr 24 23:34:42.466147 kubelet[2170]: I0424 23:34:42.465981 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95d044c9cb9e2b80c4195d77fce91dfc-kubeconfig\") pod \"kube-scheduler-172-238-161-65\" (UID: \"95d044c9cb9e2b80c4195d77fce91dfc\") " pod="kube-system/kube-scheduler-172-238-161-65" Apr 24 23:34:42.466147 kubelet[2170]: I0424 23:34:42.465998 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7107e88a045fbdbe493cb46a69cd72d2-k8s-certs\") pod \"kube-apiserver-172-238-161-65\" (UID: \"7107e88a045fbdbe493cb46a69cd72d2\") " pod="kube-system/kube-apiserver-172-238-161-65" Apr 24 23:34:42.466147 kubelet[2170]: I0424 23:34:42.466016 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7107e88a045fbdbe493cb46a69cd72d2-usr-share-ca-certificates\") pod \"kube-apiserver-172-238-161-65\" (UID: \"7107e88a045fbdbe493cb46a69cd72d2\") " pod="kube-system/kube-apiserver-172-238-161-65" Apr 24 23:34:42.466147 kubelet[2170]: I0424 23:34:42.466032 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a77ea1939ce7330079bf948095bd9a-ca-certs\") pod \"kube-controller-manager-172-238-161-65\" (UID: \"23a77ea1939ce7330079bf948095bd9a\") " pod="kube-system/kube-controller-manager-172-238-161-65" Apr 24 23:34:42.466147 kubelet[2170]: I0424 23:34:42.466056 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a77ea1939ce7330079bf948095bd9a-kubeconfig\") pod \"kube-controller-manager-172-238-161-65\" (UID: \"23a77ea1939ce7330079bf948095bd9a\") " pod="kube-system/kube-controller-manager-172-238-161-65" Apr 24 23:34:42.466478 kubelet[2170]: I0424 23:34:42.466075 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a77ea1939ce7330079bf948095bd9a-usr-share-ca-certificates\") pod \"kube-controller-manager-172-238-161-65\" (UID: \"23a77ea1939ce7330079bf948095bd9a\") " pod="kube-system/kube-controller-manager-172-238-161-65" Apr 24 23:34:42.654316 kubelet[2170]: I0424 23:34:42.653822 2170 kubelet_node_status.go:74] "Attempting to register node" node="172-238-161-65" Apr 24 23:34:42.654475 kubelet[2170]: E0424 23:34:42.654432 2170 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.238.161.65:6443/api/v1/nodes\": dial tcp 172.238.161.65:6443: connect: connection refused" node="172-238-161-65" Apr 24 23:34:42.729642 kubelet[2170]: E0424 23:34:42.729567 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:42.731216 containerd[1467]: time="2026-04-24T23:34:42.731154294Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-172-238-161-65,Uid:7107e88a045fbdbe493cb46a69cd72d2,Namespace:kube-system,Attempt:0,}" Apr 24 23:34:42.736912 kubelet[2170]: E0424 23:34:42.736839 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:42.737389 containerd[1467]: time="2026-04-24T23:34:42.737352293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-238-161-65,Uid:23a77ea1939ce7330079bf948095bd9a,Namespace:kube-system,Attempt:0,}" Apr 24 23:34:42.745274 kubelet[2170]: E0424 23:34:42.745244 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:42.745885 containerd[1467]: time="2026-04-24T23:34:42.745837979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-238-161-65,Uid:95d044c9cb9e2b80c4195d77fce91dfc,Namespace:kube-system,Attempt:0,}" Apr 24 23:34:42.866018 kubelet[2170]: E0424 23:34:42.865964 2170 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.161.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-161-65?timeout=10s\": dial tcp 172.238.161.65:6443: connect: connection refused" interval="800ms" Apr 24 23:34:43.058264 kubelet[2170]: I0424 23:34:43.058141 2170 kubelet_node_status.go:74] "Attempting to register node" node="172-238-161-65" Apr 24 23:34:43.058483 kubelet[2170]: E0424 23:34:43.058446 2170 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.238.161.65:6443/api/v1/nodes\": dial tcp 172.238.161.65:6443: connect: connection refused" node="172-238-161-65" Apr 24 23:34:43.247310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2263271979.mount: Deactivated successfully. 
Apr 24 23:34:43.253280 containerd[1467]: time="2026-04-24T23:34:43.253238436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:34:43.254341 containerd[1467]: time="2026-04-24T23:34:43.254295175Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:34:43.255006 containerd[1467]: time="2026-04-24T23:34:43.254982530Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:34:43.255839 containerd[1467]: time="2026-04-24T23:34:43.255808860Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:34:43.256699 containerd[1467]: time="2026-04-24T23:34:43.256600077Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062" Apr 24 23:34:43.257239 containerd[1467]: time="2026-04-24T23:34:43.257151682Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:34:43.257239 containerd[1467]: time="2026-04-24T23:34:43.257209282Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:34:43.262986 containerd[1467]: time="2026-04-24T23:34:43.262956180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:34:43.265122 
containerd[1467]: time="2026-04-24T23:34:43.264546788Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 518.618743ms" Apr 24 23:34:43.266086 containerd[1467]: time="2026-04-24T23:34:43.266058826Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 528.640676ms" Apr 24 23:34:43.267534 containerd[1467]: time="2026-04-24T23:34:43.267378362Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 536.130711ms" Apr 24 23:34:43.365532 containerd[1467]: time="2026-04-24T23:34:43.365444833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:34:43.366804 containerd[1467]: time="2026-04-24T23:34:43.365512793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:34:43.366804 containerd[1467]: time="2026-04-24T23:34:43.365711343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:34:43.366804 containerd[1467]: time="2026-04-24T23:34:43.365851076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:34:43.374875 containerd[1467]: time="2026-04-24T23:34:43.373916400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:34:43.374875 containerd[1467]: time="2026-04-24T23:34:43.373974629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:34:43.374875 containerd[1467]: time="2026-04-24T23:34:43.373989880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:34:43.374875 containerd[1467]: time="2026-04-24T23:34:43.374230799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:34:43.381159 containerd[1467]: time="2026-04-24T23:34:43.380872057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:34:43.381159 containerd[1467]: time="2026-04-24T23:34:43.380929348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:34:43.381159 containerd[1467]: time="2026-04-24T23:34:43.380942622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:34:43.381159 containerd[1467]: time="2026-04-24T23:34:43.381014565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:34:43.402079 systemd[1]: Started cri-containerd-35be19471d210b4a95be4662803bce4f218509ed30f8b803a6e6d140b769609c.scope - libcontainer container 35be19471d210b4a95be4662803bce4f218509ed30f8b803a6e6d140b769609c. 
Apr 24 23:34:43.419828 systemd[1]: Started cri-containerd-77377a97a07c06e7b4e75ac21e29ec27fbc7fdc6628b98f21a7ceaea1b3a0242.scope - libcontainer container 77377a97a07c06e7b4e75ac21e29ec27fbc7fdc6628b98f21a7ceaea1b3a0242. Apr 24 23:34:43.425386 systemd[1]: Started cri-containerd-bf65a7cd62a3b7311f43347fb6713317417ced8c3e738e227851fbd55fc14498.scope - libcontainer container bf65a7cd62a3b7311f43347fb6713317417ced8c3e738e227851fbd55fc14498. Apr 24 23:34:43.489397 containerd[1467]: time="2026-04-24T23:34:43.487866402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-238-161-65,Uid:23a77ea1939ce7330079bf948095bd9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"35be19471d210b4a95be4662803bce4f218509ed30f8b803a6e6d140b769609c\"" Apr 24 23:34:43.493686 kubelet[2170]: E0424 23:34:43.492528 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:43.500480 containerd[1467]: time="2026-04-24T23:34:43.500415521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-238-161-65,Uid:7107e88a045fbdbe493cb46a69cd72d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"77377a97a07c06e7b4e75ac21e29ec27fbc7fdc6628b98f21a7ceaea1b3a0242\"" Apr 24 23:34:43.501170 kubelet[2170]: E0424 23:34:43.500902 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:43.505554 containerd[1467]: time="2026-04-24T23:34:43.505504598Z" level=info msg="CreateContainer within sandbox \"35be19471d210b4a95be4662803bce4f218509ed30f8b803a6e6d140b769609c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 24 23:34:43.506122 containerd[1467]: time="2026-04-24T23:34:43.506048078Z" level=info msg="CreateContainer within 
sandbox \"77377a97a07c06e7b4e75ac21e29ec27fbc7fdc6628b98f21a7ceaea1b3a0242\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 24 23:34:43.524314 containerd[1467]: time="2026-04-24T23:34:43.524273750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-238-161-65,Uid:95d044c9cb9e2b80c4195d77fce91dfc,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf65a7cd62a3b7311f43347fb6713317417ced8c3e738e227851fbd55fc14498\"" Apr 24 23:34:43.525903 kubelet[2170]: E0424 23:34:43.525633 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:43.529366 containerd[1467]: time="2026-04-24T23:34:43.529328034Z" level=info msg="CreateContainer within sandbox \"bf65a7cd62a3b7311f43347fb6713317417ced8c3e738e227851fbd55fc14498\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 24 23:34:43.529660 containerd[1467]: time="2026-04-24T23:34:43.529639448Z" level=info msg="CreateContainer within sandbox \"77377a97a07c06e7b4e75ac21e29ec27fbc7fdc6628b98f21a7ceaea1b3a0242\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c332d0a1c9cbb975f15ab3b90a91408047cd1ad2c72ebba62f782f8b3631f184\"" Apr 24 23:34:43.531701 containerd[1467]: time="2026-04-24T23:34:43.531384121Z" level=info msg="StartContainer for \"c332d0a1c9cbb975f15ab3b90a91408047cd1ad2c72ebba62f782f8b3631f184\"" Apr 24 23:34:43.532047 containerd[1467]: time="2026-04-24T23:34:43.531971678Z" level=info msg="CreateContainer within sandbox \"35be19471d210b4a95be4662803bce4f218509ed30f8b803a6e6d140b769609c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1284a0381a1006ee0560bb03353d53f8c05f7a43570edd5b1bb1c65fa4685b08\"" Apr 24 23:34:43.532707 containerd[1467]: time="2026-04-24T23:34:43.532522754Z" level=info msg="StartContainer for 
\"1284a0381a1006ee0560bb03353d53f8c05f7a43570edd5b1bb1c65fa4685b08\"" Apr 24 23:34:43.547646 containerd[1467]: time="2026-04-24T23:34:43.547583628Z" level=info msg="CreateContainer within sandbox \"bf65a7cd62a3b7311f43347fb6713317417ced8c3e738e227851fbd55fc14498\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"36b1fe1c0fa670a3346f2eabe91a5fa4710400801680963319d81fc3bac9916d\"" Apr 24 23:34:43.548276 containerd[1467]: time="2026-04-24T23:34:43.548252240Z" level=info msg="StartContainer for \"36b1fe1c0fa670a3346f2eabe91a5fa4710400801680963319d81fc3bac9916d\"" Apr 24 23:34:43.573930 systemd[1]: Started cri-containerd-c332d0a1c9cbb975f15ab3b90a91408047cd1ad2c72ebba62f782f8b3631f184.scope - libcontainer container c332d0a1c9cbb975f15ab3b90a91408047cd1ad2c72ebba62f782f8b3631f184. Apr 24 23:34:43.577206 systemd[1]: Started cri-containerd-1284a0381a1006ee0560bb03353d53f8c05f7a43570edd5b1bb1c65fa4685b08.scope - libcontainer container 1284a0381a1006ee0560bb03353d53f8c05f7a43570edd5b1bb1c65fa4685b08. Apr 24 23:34:43.595592 systemd[1]: Started cri-containerd-36b1fe1c0fa670a3346f2eabe91a5fa4710400801680963319d81fc3bac9916d.scope - libcontainer container 36b1fe1c0fa670a3346f2eabe91a5fa4710400801680963319d81fc3bac9916d. 
Apr 24 23:34:43.648217 containerd[1467]: time="2026-04-24T23:34:43.648092877Z" level=info msg="StartContainer for \"c332d0a1c9cbb975f15ab3b90a91408047cd1ad2c72ebba62f782f8b3631f184\" returns successfully" Apr 24 23:34:43.677628 kubelet[2170]: E0424 23:34:43.675634 2170 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.161.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-161-65?timeout=10s\": dial tcp 172.238.161.65:6443: connect: connection refused" interval="1.6s" Apr 24 23:34:43.690666 containerd[1467]: time="2026-04-24T23:34:43.690621168Z" level=info msg="StartContainer for \"36b1fe1c0fa670a3346f2eabe91a5fa4710400801680963319d81fc3bac9916d\" returns successfully" Apr 24 23:34:43.694875 containerd[1467]: time="2026-04-24T23:34:43.694845070Z" level=info msg="StartContainer for \"1284a0381a1006ee0560bb03353d53f8c05f7a43570edd5b1bb1c65fa4685b08\" returns successfully" Apr 24 23:34:43.862705 kubelet[2170]: I0424 23:34:43.862661 2170 kubelet_node_status.go:74] "Attempting to register node" node="172-238-161-65" Apr 24 23:34:44.309863 kubelet[2170]: E0424 23:34:44.309631 2170 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-161-65\" not found" node="172-238-161-65" Apr 24 23:34:44.309863 kubelet[2170]: E0424 23:34:44.309781 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:44.310490 kubelet[2170]: E0424 23:34:44.310453 2170 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-161-65\" not found" node="172-238-161-65" Apr 24 23:34:44.310991 kubelet[2170]: E0424 23:34:44.310976 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:44.311278 kubelet[2170]: E0424 23:34:44.310737 2170 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-161-65\" not found" node="172-238-161-65" Apr 24 23:34:44.311278 kubelet[2170]: E0424 23:34:44.311214 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:44.836554 kubelet[2170]: I0424 23:34:44.836493 2170 kubelet_node_status.go:77] "Successfully registered node" node="172-238-161-65" Apr 24 23:34:44.837213 kubelet[2170]: E0424 23:34:44.837060 2170 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"172-238-161-65\": node \"172-238-161-65\" not found" Apr 24 23:34:44.855369 kubelet[2170]: E0424 23:34:44.855326 2170 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-238-161-65\" not found" Apr 24 23:34:44.955842 kubelet[2170]: E0424 23:34:44.955788 2170 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-238-161-65\" not found" Apr 24 23:34:45.056119 kubelet[2170]: E0424 23:34:45.056095 2170 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-238-161-65\" not found" Apr 24 23:34:45.157418 kubelet[2170]: E0424 23:34:45.157345 2170 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-238-161-65\" not found" Apr 24 23:34:45.258240 kubelet[2170]: E0424 23:34:45.258177 2170 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-238-161-65\" not found" Apr 24 23:34:45.313262 kubelet[2170]: E0424 23:34:45.313210 2170 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-161-65\" not found" node="172-238-161-65" 
Apr 24 23:34:45.313709 kubelet[2170]: E0424 23:34:45.313686 2170 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-161-65\" not found" node="172-238-161-65" Apr 24 23:34:45.313814 kubelet[2170]: E0424 23:34:45.313793 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:45.315444 kubelet[2170]: E0424 23:34:45.314718 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:45.359478 kubelet[2170]: E0424 23:34:45.359416 2170 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-238-161-65\" not found" Apr 24 23:34:45.460155 kubelet[2170]: E0424 23:34:45.459988 2170 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-238-161-65\" not found" Apr 24 23:34:45.560645 kubelet[2170]: E0424 23:34:45.560579 2170 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-238-161-65\" not found" Apr 24 23:34:45.661554 kubelet[2170]: E0424 23:34:45.661497 2170 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-238-161-65\" not found" Apr 24 23:34:45.762854 kubelet[2170]: E0424 23:34:45.762450 2170 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-238-161-65\" not found" Apr 24 23:34:45.862915 kubelet[2170]: E0424 23:34:45.862862 2170 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-238-161-65\" not found" Apr 24 23:34:45.964306 kubelet[2170]: E0424 23:34:45.964166 2170 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-238-161-65\" not found" Apr 24 23:34:46.065883 
kubelet[2170]: E0424 23:34:46.065143 2170 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-238-161-65\" not found" Apr 24 23:34:46.164363 kubelet[2170]: I0424 23:34:46.164309 2170 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-161-65" Apr 24 23:34:46.174188 kubelet[2170]: I0424 23:34:46.174140 2170 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-161-65" Apr 24 23:34:46.181253 kubelet[2170]: I0424 23:34:46.181220 2170 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-161-65" Apr 24 23:34:46.243696 kubelet[2170]: I0424 23:34:46.243630 2170 apiserver.go:52] "Watching apiserver" Apr 24 23:34:46.248113 kubelet[2170]: E0424 23:34:46.248065 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:46.264741 kubelet[2170]: I0424 23:34:46.264707 2170 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 24 23:34:46.312438 kubelet[2170]: E0424 23:34:46.312403 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:46.313121 kubelet[2170]: I0424 23:34:46.313081 2170 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-161-65" Apr 24 23:34:46.319822 kubelet[2170]: E0424 23:34:46.319634 2170 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-161-65\" already exists" pod="kube-system/kube-scheduler-172-238-161-65" Apr 24 23:34:46.320244 kubelet[2170]: E0424 23:34:46.320211 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:46.996517 systemd[1]: Reloading requested from client PID 2460 ('systemctl') (unit session-7.scope)... Apr 24 23:34:46.996537 systemd[1]: Reloading... Apr 24 23:34:47.137017 zram_generator::config[2503]: No configuration found. Apr 24 23:34:47.266359 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:34:47.314340 kubelet[2170]: E0424 23:34:47.313827 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:47.363311 systemd[1]: Reloading finished in 366 ms. Apr 24 23:34:47.422979 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:34:47.443961 systemd[1]: kubelet.service: Deactivated successfully. Apr 24 23:34:47.444312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:34:47.449994 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:34:47.640313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:34:47.656013 (kubelet)[2551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:34:47.692931 kubelet[2551]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 24 23:34:47.704303 kubelet[2551]: I0424 23:34:47.704253 2551 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 24 23:34:47.704303 kubelet[2551]: I0424 23:34:47.704287 2551 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:34:47.704303 kubelet[2551]: I0424 23:34:47.704305 2551 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 24 23:34:47.704303 kubelet[2551]: I0424 23:34:47.704311 2551 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 23:34:47.704537 kubelet[2551]: I0424 23:34:47.704510 2551 server.go:951] "Client rotation is on, will bootstrap in background" Apr 24 23:34:47.705637 kubelet[2551]: I0424 23:34:47.705610 2551 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 24 23:34:47.707598 kubelet[2551]: I0424 23:34:47.707442 2551 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:34:47.713129 kubelet[2551]: E0424 23:34:47.713090 2551 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:34:47.713191 kubelet[2551]: I0424 23:34:47.713132 2551 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 24 23:34:47.717240 kubelet[2551]: I0424 23:34:47.717218 2551 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 24 23:34:47.717531 kubelet[2551]: I0424 23:34:47.717478 2551 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:34:47.717661 kubelet[2551]: I0424 23:34:47.717511 2551 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-238-161-65","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 23:34:47.717661 kubelet[2551]: I0424 23:34:47.717652 2551 topology_manager.go:143] "Creating topology manager with none policy" Apr 24 
23:34:47.717661 kubelet[2551]: I0424 23:34:47.717661 2551 container_manager_linux.go:308] "Creating device plugin manager" Apr 24 23:34:47.717835 kubelet[2551]: I0424 23:34:47.717715 2551 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 24 23:34:47.717903 kubelet[2551]: I0424 23:34:47.717870 2551 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 24 23:34:47.718121 kubelet[2551]: I0424 23:34:47.718091 2551 kubelet.go:482] "Attempting to sync node with API server" Apr 24 23:34:47.718121 kubelet[2551]: I0424 23:34:47.718114 2551 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:34:47.718494 kubelet[2551]: I0424 23:34:47.718465 2551 kubelet.go:394] "Adding apiserver pod source" Apr 24 23:34:47.718494 kubelet[2551]: I0424 23:34:47.718487 2551 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:34:47.727935 kubelet[2551]: I0424 23:34:47.727887 2551 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:34:47.728545 kubelet[2551]: I0424 23:34:47.728503 2551 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:34:47.728545 kubelet[2551]: I0424 23:34:47.728534 2551 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 24 23:34:47.736946 kubelet[2551]: I0424 23:34:47.736840 2551 server.go:1257] "Started kubelet" Apr 24 23:34:47.740460 kubelet[2551]: I0424 23:34:47.740438 2551 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 24 23:34:47.745282 kubelet[2551]: I0424 23:34:47.744926 2551 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:34:47.745705 kubelet[2551]: I0424 23:34:47.745654 2551 server.go:317] "Adding debug handlers 
to kubelet server" Apr 24 23:34:47.746395 kubelet[2551]: I0424 23:34:47.746338 2551 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:34:47.746440 kubelet[2551]: I0424 23:34:47.746406 2551 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 24 23:34:47.746643 kubelet[2551]: I0424 23:34:47.746579 2551 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:34:47.748312 kubelet[2551]: I0424 23:34:47.748156 2551 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:34:47.751729 kubelet[2551]: I0424 23:34:47.751593 2551 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 24 23:34:47.752452 kubelet[2551]: I0424 23:34:47.752196 2551 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 24 23:34:47.752452 kubelet[2551]: I0424 23:34:47.752369 2551 reconciler.go:29] "Reconciler: start to sync state" Apr 24 23:34:47.755347 kubelet[2551]: I0424 23:34:47.753768 2551 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:34:47.755347 kubelet[2551]: I0424 23:34:47.753849 2551 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:34:47.757717 kubelet[2551]: I0424 23:34:47.757109 2551 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:34:47.760314 kubelet[2551]: E0424 23:34:47.760278 2551 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 23:34:47.774613 kubelet[2551]: I0424 23:34:47.773796 2551 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 24 23:34:47.781063 kubelet[2551]: I0424 23:34:47.781041 2551 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 24 23:34:47.781211 kubelet[2551]: I0424 23:34:47.781202 2551 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 24 23:34:47.781290 kubelet[2551]: I0424 23:34:47.781281 2551 kubelet.go:2501] "Starting kubelet main sync loop" Apr 24 23:34:47.781562 kubelet[2551]: E0424 23:34:47.781539 2551 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:34:47.819602 kubelet[2551]: I0424 23:34:47.819578 2551 cpu_manager.go:225] "Starting" policy="none" Apr 24 23:34:47.821180 kubelet[2551]: I0424 23:34:47.819855 2551 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 24 23:34:47.821298 kubelet[2551]: I0424 23:34:47.821287 2551 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 24 23:34:47.822035 kubelet[2551]: I0424 23:34:47.821793 2551 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 24 23:34:47.822250 kubelet[2551]: I0424 23:34:47.822219 2551 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 24 23:34:47.822531 kubelet[2551]: I0424 23:34:47.822520 2551 policy_none.go:50] "Start" Apr 24 23:34:47.822606 kubelet[2551]: I0424 23:34:47.822595 2551 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 24 23:34:47.822659 kubelet[2551]: I0424 23:34:47.822650 2551 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 24 23:34:47.823080 kubelet[2551]: I0424 23:34:47.822968 2551 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 24 23:34:47.823313 kubelet[2551]: I0424 23:34:47.823302 2551 
policy_none.go:44] "Start" Apr 24 23:34:47.828180 kubelet[2551]: E0424 23:34:47.828145 2551 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:34:47.828859 kubelet[2551]: I0424 23:34:47.828351 2551 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 24 23:34:47.828859 kubelet[2551]: I0424 23:34:47.828372 2551 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:34:47.828859 kubelet[2551]: I0424 23:34:47.828647 2551 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 24 23:34:47.831718 kubelet[2551]: E0424 23:34:47.831178 2551 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 23:34:47.883634 kubelet[2551]: I0424 23:34:47.883168 2551 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-161-65" Apr 24 23:34:47.883634 kubelet[2551]: I0424 23:34:47.883287 2551 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-161-65" Apr 24 23:34:47.883634 kubelet[2551]: I0424 23:34:47.883532 2551 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-161-65" Apr 24 23:34:47.890486 kubelet[2551]: E0424 23:34:47.890387 2551 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-238-161-65\" already exists" pod="kube-system/kube-controller-manager-172-238-161-65" Apr 24 23:34:47.892408 kubelet[2551]: E0424 23:34:47.892253 2551 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-161-65\" already exists" pod="kube-system/kube-scheduler-172-238-161-65" Apr 24 23:34:47.892408 kubelet[2551]: E0424 23:34:47.892368 2551 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-161-65\" 
already exists" pod="kube-system/kube-apiserver-172-238-161-65" Apr 24 23:34:47.934611 kubelet[2551]: I0424 23:34:47.934565 2551 kubelet_node_status.go:74] "Attempting to register node" node="172-238-161-65" Apr 24 23:34:47.942542 kubelet[2551]: I0424 23:34:47.942486 2551 kubelet_node_status.go:123] "Node was previously registered" node="172-238-161-65" Apr 24 23:34:47.942695 kubelet[2551]: I0424 23:34:47.942592 2551 kubelet_node_status.go:77] "Successfully registered node" node="172-238-161-65" Apr 24 23:34:47.952562 kubelet[2551]: I0424 23:34:47.952525 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a77ea1939ce7330079bf948095bd9a-k8s-certs\") pod \"kube-controller-manager-172-238-161-65\" (UID: \"23a77ea1939ce7330079bf948095bd9a\") " pod="kube-system/kube-controller-manager-172-238-161-65" Apr 24 23:34:47.952562 kubelet[2551]: I0424 23:34:47.952552 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95d044c9cb9e2b80c4195d77fce91dfc-kubeconfig\") pod \"kube-scheduler-172-238-161-65\" (UID: \"95d044c9cb9e2b80c4195d77fce91dfc\") " pod="kube-system/kube-scheduler-172-238-161-65" Apr 24 23:34:47.952562 kubelet[2551]: I0424 23:34:47.952567 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7107e88a045fbdbe493cb46a69cd72d2-ca-certs\") pod \"kube-apiserver-172-238-161-65\" (UID: \"7107e88a045fbdbe493cb46a69cd72d2\") " pod="kube-system/kube-apiserver-172-238-161-65" Apr 24 23:34:47.952562 kubelet[2551]: I0424 23:34:47.952579 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7107e88a045fbdbe493cb46a69cd72d2-k8s-certs\") pod \"kube-apiserver-172-238-161-65\" (UID: 
\"7107e88a045fbdbe493cb46a69cd72d2\") " pod="kube-system/kube-apiserver-172-238-161-65" Apr 24 23:34:47.952840 kubelet[2551]: I0424 23:34:47.952593 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a77ea1939ce7330079bf948095bd9a-flexvolume-dir\") pod \"kube-controller-manager-172-238-161-65\" (UID: \"23a77ea1939ce7330079bf948095bd9a\") " pod="kube-system/kube-controller-manager-172-238-161-65" Apr 24 23:34:47.952840 kubelet[2551]: I0424 23:34:47.952612 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a77ea1939ce7330079bf948095bd9a-kubeconfig\") pod \"kube-controller-manager-172-238-161-65\" (UID: \"23a77ea1939ce7330079bf948095bd9a\") " pod="kube-system/kube-controller-manager-172-238-161-65" Apr 24 23:34:47.952840 kubelet[2551]: I0424 23:34:47.952626 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a77ea1939ce7330079bf948095bd9a-usr-share-ca-certificates\") pod \"kube-controller-manager-172-238-161-65\" (UID: \"23a77ea1939ce7330079bf948095bd9a\") " pod="kube-system/kube-controller-manager-172-238-161-65" Apr 24 23:34:47.952840 kubelet[2551]: I0424 23:34:47.952640 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7107e88a045fbdbe493cb46a69cd72d2-usr-share-ca-certificates\") pod \"kube-apiserver-172-238-161-65\" (UID: \"7107e88a045fbdbe493cb46a69cd72d2\") " pod="kube-system/kube-apiserver-172-238-161-65" Apr 24 23:34:47.952840 kubelet[2551]: I0424 23:34:47.952653 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/23a77ea1939ce7330079bf948095bd9a-ca-certs\") pod \"kube-controller-manager-172-238-161-65\" (UID: \"23a77ea1939ce7330079bf948095bd9a\") " pod="kube-system/kube-controller-manager-172-238-161-65" Apr 24 23:34:48.192202 kubelet[2551]: E0424 23:34:48.191816 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:48.192706 kubelet[2551]: E0424 23:34:48.192464 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:48.192706 kubelet[2551]: E0424 23:34:48.192565 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:48.720278 kubelet[2551]: I0424 23:34:48.719980 2551 apiserver.go:52] "Watching apiserver" Apr 24 23:34:48.752752 kubelet[2551]: I0424 23:34:48.752680 2551 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 24 23:34:48.804536 kubelet[2551]: E0424 23:34:48.804495 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:48.805385 kubelet[2551]: I0424 23:34:48.805370 2551 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-161-65" Apr 24 23:34:48.807799 kubelet[2551]: I0424 23:34:48.807758 2551 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-161-65" Apr 24 23:34:48.812016 kubelet[2551]: E0424 23:34:48.811987 2551 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-238-161-65\" 
already exists" pod="kube-system/kube-controller-manager-172-238-161-65" Apr 24 23:34:48.812132 kubelet[2551]: E0424 23:34:48.812111 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:48.813734 kubelet[2551]: E0424 23:34:48.813712 2551 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-161-65\" already exists" pod="kube-system/kube-apiserver-172-238-161-65" Apr 24 23:34:48.813913 kubelet[2551]: E0424 23:34:48.813893 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:48.829690 kubelet[2551]: I0424 23:34:48.829616 2551 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-238-161-65" podStartSLOduration=2.829604999 podStartE2EDuration="2.829604999s" podCreationTimestamp="2026-04-24 23:34:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:34:48.828969932 +0000 UTC m=+1.167166464" watchObservedRunningTime="2026-04-24 23:34:48.829604999 +0000 UTC m=+1.167801521" Apr 24 23:34:48.835427 kubelet[2551]: I0424 23:34:48.835246 2551 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-238-161-65" podStartSLOduration=2.835213373 podStartE2EDuration="2.835213373s" podCreationTimestamp="2026-04-24 23:34:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:34:48.835043389 +0000 UTC m=+1.173239911" watchObservedRunningTime="2026-04-24 23:34:48.835213373 +0000 UTC m=+1.173409905" Apr 24 23:34:48.842157 kubelet[2551]: I0424 23:34:48.841982 2551 
pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-238-161-65" podStartSLOduration=2.841971929 podStartE2EDuration="2.841971929s" podCreationTimestamp="2026-04-24 23:34:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:34:48.841876213 +0000 UTC m=+1.180072755" watchObservedRunningTime="2026-04-24 23:34:48.841971929 +0000 UTC m=+1.180168461" Apr 24 23:34:49.805592 kubelet[2551]: E0424 23:34:49.805535 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:49.806417 kubelet[2551]: E0424 23:34:49.806241 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:49.806492 kubelet[2551]: E0424 23:34:49.806466 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:50.096303 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Apr 24 23:34:50.807884 kubelet[2551]: E0424 23:34:50.807790 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:50.808463 kubelet[2551]: E0424 23:34:50.808440 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:52.907327 kubelet[2551]: I0424 23:34:52.907280 2551 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 24 23:34:52.907910 kubelet[2551]: I0424 23:34:52.907835 2551 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 24 23:34:52.907942 containerd[1467]: time="2026-04-24T23:34:52.907616172Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 24 23:34:53.844801 systemd[1]: Created slice kubepods-besteffort-poda460f34a_e22f_430f_880b_a432baa6c09e.slice - libcontainer container kubepods-besteffort-poda460f34a_e22f_430f_880b_a432baa6c09e.slice. 
Apr 24 23:34:53.891422 kubelet[2551]: I0424 23:34:53.891261 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a460f34a-e22f-430f-880b-a432baa6c09e-lib-modules\") pod \"kube-proxy-kprm7\" (UID: \"a460f34a-e22f-430f-880b-a432baa6c09e\") " pod="kube-system/kube-proxy-kprm7" Apr 24 23:34:53.891422 kubelet[2551]: I0424 23:34:53.891301 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8ldv\" (UniqueName: \"kubernetes.io/projected/a460f34a-e22f-430f-880b-a432baa6c09e-kube-api-access-p8ldv\") pod \"kube-proxy-kprm7\" (UID: \"a460f34a-e22f-430f-880b-a432baa6c09e\") " pod="kube-system/kube-proxy-kprm7" Apr 24 23:34:53.891422 kubelet[2551]: I0424 23:34:53.891335 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a460f34a-e22f-430f-880b-a432baa6c09e-kube-proxy\") pod \"kube-proxy-kprm7\" (UID: \"a460f34a-e22f-430f-880b-a432baa6c09e\") " pod="kube-system/kube-proxy-kprm7" Apr 24 23:34:53.891422 kubelet[2551]: I0424 23:34:53.891352 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a460f34a-e22f-430f-880b-a432baa6c09e-xtables-lock\") pod \"kube-proxy-kprm7\" (UID: \"a460f34a-e22f-430f-880b-a432baa6c09e\") " pod="kube-system/kube-proxy-kprm7" Apr 24 23:34:54.162020 kubelet[2551]: E0424 23:34:54.161290 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:54.163953 containerd[1467]: time="2026-04-24T23:34:54.163890882Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-kprm7,Uid:a460f34a-e22f-430f-880b-a432baa6c09e,Namespace:kube-system,Attempt:0,}" Apr 24 23:34:54.198075 containerd[1467]: time="2026-04-24T23:34:54.197484045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:34:54.198075 containerd[1467]: time="2026-04-24T23:34:54.197546677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:34:54.198075 containerd[1467]: time="2026-04-24T23:34:54.197560851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:34:54.198075 containerd[1467]: time="2026-04-24T23:34:54.197632729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:34:54.214347 systemd[1]: Created slice kubepods-besteffort-pod8b0ac406_43fb_45cf_bfdc_9e91beb35080.slice - libcontainer container kubepods-besteffort-pod8b0ac406_43fb_45cf_bfdc_9e91beb35080.slice. Apr 24 23:34:54.222851 systemd[1]: run-containerd-runc-k8s.io-2e65d9f4561e0cd3d1204226f052a027e2d8c8a99ecc22362909be3c76d44d6f-runc.lDudKE.mount: Deactivated successfully. Apr 24 23:34:54.233834 systemd[1]: Started cri-containerd-2e65d9f4561e0cd3d1204226f052a027e2d8c8a99ecc22362909be3c76d44d6f.scope - libcontainer container 2e65d9f4561e0cd3d1204226f052a027e2d8c8a99ecc22362909be3c76d44d6f. 
Apr 24 23:34:54.259231 containerd[1467]: time="2026-04-24T23:34:54.259181442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kprm7,Uid:a460f34a-e22f-430f-880b-a432baa6c09e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e65d9f4561e0cd3d1204226f052a027e2d8c8a99ecc22362909be3c76d44d6f\"" Apr 24 23:34:54.260234 kubelet[2551]: E0424 23:34:54.260214 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:54.267743 containerd[1467]: time="2026-04-24T23:34:54.266216909Z" level=info msg="CreateContainer within sandbox \"2e65d9f4561e0cd3d1204226f052a027e2d8c8a99ecc22362909be3c76d44d6f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 24 23:34:54.279340 containerd[1467]: time="2026-04-24T23:34:54.279292681Z" level=info msg="CreateContainer within sandbox \"2e65d9f4561e0cd3d1204226f052a027e2d8c8a99ecc22362909be3c76d44d6f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5a77d740233fef024336cd885d086cab0605458cf1de9112215649ad7f1fdbad\"" Apr 24 23:34:54.280106 containerd[1467]: time="2026-04-24T23:34:54.280076916Z" level=info msg="StartContainer for \"5a77d740233fef024336cd885d086cab0605458cf1de9112215649ad7f1fdbad\"" Apr 24 23:34:54.293812 kubelet[2551]: I0424 23:34:54.293740 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmpwh\" (UniqueName: \"kubernetes.io/projected/8b0ac406-43fb-45cf-bfdc-9e91beb35080-kube-api-access-gmpwh\") pod \"tigera-operator-6cf4cccc57-xwqv5\" (UID: \"8b0ac406-43fb-45cf-bfdc-9e91beb35080\") " pod="tigera-operator/tigera-operator-6cf4cccc57-xwqv5" Apr 24 23:34:54.293951 kubelet[2551]: I0424 23:34:54.293817 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/8b0ac406-43fb-45cf-bfdc-9e91beb35080-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-xwqv5\" (UID: \"8b0ac406-43fb-45cf-bfdc-9e91beb35080\") " pod="tigera-operator/tigera-operator-6cf4cccc57-xwqv5" Apr 24 23:34:54.320326 systemd[1]: Started cri-containerd-5a77d740233fef024336cd885d086cab0605458cf1de9112215649ad7f1fdbad.scope - libcontainer container 5a77d740233fef024336cd885d086cab0605458cf1de9112215649ad7f1fdbad. Apr 24 23:34:54.358246 containerd[1467]: time="2026-04-24T23:34:54.358065391Z" level=info msg="StartContainer for \"5a77d740233fef024336cd885d086cab0605458cf1de9112215649ad7f1fdbad\" returns successfully" Apr 24 23:34:54.523203 containerd[1467]: time="2026-04-24T23:34:54.523065975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-xwqv5,Uid:8b0ac406-43fb-45cf-bfdc-9e91beb35080,Namespace:tigera-operator,Attempt:0,}" Apr 24 23:34:54.544060 containerd[1467]: time="2026-04-24T23:34:54.543949405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:34:54.544060 containerd[1467]: time="2026-04-24T23:34:54.544002981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:34:54.544060 containerd[1467]: time="2026-04-24T23:34:54.544017585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:34:54.544523 containerd[1467]: time="2026-04-24T23:34:54.544355886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:34:54.570756 systemd[1]: Started cri-containerd-a4bb272ded3fe18cb7c96c4a14aa7669d447484afaff212accad78143325c9d2.scope - libcontainer container a4bb272ded3fe18cb7c96c4a14aa7669d447484afaff212accad78143325c9d2. 
Apr 24 23:34:54.642281 containerd[1467]: time="2026-04-24T23:34:54.642245643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-xwqv5,Uid:8b0ac406-43fb-45cf-bfdc-9e91beb35080,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a4bb272ded3fe18cb7c96c4a14aa7669d447484afaff212accad78143325c9d2\"" Apr 24 23:34:54.646130 containerd[1467]: time="2026-04-24T23:34:54.645919198Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 24 23:34:54.818595 kubelet[2551]: E0424 23:34:54.818067 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:55.638961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2479589621.mount: Deactivated successfully. Apr 24 23:34:57.796936 kubelet[2551]: I0424 23:34:57.796247 2551 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-kprm7" podStartSLOduration=4.796236148 podStartE2EDuration="4.796236148s" podCreationTimestamp="2026-04-24 23:34:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:34:54.832067045 +0000 UTC m=+7.170263567" watchObservedRunningTime="2026-04-24 23:34:57.796236148 +0000 UTC m=+10.134432670" Apr 24 23:34:57.846709 containerd[1467]: time="2026-04-24T23:34:57.846103284Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:57.847174 containerd[1467]: time="2026-04-24T23:34:57.846804728Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 24 23:34:57.847718 containerd[1467]: time="2026-04-24T23:34:57.847534952Z" level=info msg="ImageCreate event 
name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:57.849215 containerd[1467]: time="2026-04-24T23:34:57.849180198Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:34:57.849982 containerd[1467]: time="2026-04-24T23:34:57.849879402Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.203933255s" Apr 24 23:34:57.849982 containerd[1467]: time="2026-04-24T23:34:57.849902915Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 24 23:34:57.854569 containerd[1467]: time="2026-04-24T23:34:57.854543229Z" level=info msg="CreateContainer within sandbox \"a4bb272ded3fe18cb7c96c4a14aa7669d447484afaff212accad78143325c9d2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 24 23:34:57.879014 containerd[1467]: time="2026-04-24T23:34:57.878793988Z" level=info msg="CreateContainer within sandbox \"a4bb272ded3fe18cb7c96c4a14aa7669d447484afaff212accad78143325c9d2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e3238866b5fc2a125876c883507132d58e33965e69535ba4e2e593c33d34a8c9\"" Apr 24 23:34:57.881752 containerd[1467]: time="2026-04-24T23:34:57.880303973Z" level=info msg="StartContainer for \"e3238866b5fc2a125876c883507132d58e33965e69535ba4e2e593c33d34a8c9\"" Apr 24 23:34:57.919011 systemd[1]: Started 
cri-containerd-e3238866b5fc2a125876c883507132d58e33965e69535ba4e2e593c33d34a8c9.scope - libcontainer container e3238866b5fc2a125876c883507132d58e33965e69535ba4e2e593c33d34a8c9. Apr 24 23:34:57.948176 containerd[1467]: time="2026-04-24T23:34:57.946816658Z" level=info msg="StartContainer for \"e3238866b5fc2a125876c883507132d58e33965e69535ba4e2e593c33d34a8c9\" returns successfully" Apr 24 23:34:57.960845 kubelet[2551]: E0424 23:34:57.960143 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:58.914142 kubelet[2551]: E0424 23:34:58.914098 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:34:58.925441 kubelet[2551]: I0424 23:34:58.925221 2551 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-xwqv5" podStartSLOduration=1.720005792 podStartE2EDuration="4.925208691s" podCreationTimestamp="2026-04-24 23:34:54 +0000 UTC" firstStartedPulling="2026-04-24 23:34:54.64536675 +0000 UTC m=+6.983563282" lastFinishedPulling="2026-04-24 23:34:57.850569659 +0000 UTC m=+10.188766181" observedRunningTime="2026-04-24 23:34:58.838885112 +0000 UTC m=+11.177081644" watchObservedRunningTime="2026-04-24 23:34:58.925208691 +0000 UTC m=+11.263405213" Apr 24 23:34:59.831154 kubelet[2551]: E0424 23:34:59.831081 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:35:00.062746 kubelet[2551]: E0424 23:35:00.062708 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 
172.232.0.20" Apr 24 23:35:03.474830 sudo[1682]: pam_unix(sudo:session): session closed for user root Apr 24 23:35:03.581849 sshd[1679]: pam_unix(sshd:session): session closed for user core Apr 24 23:35:03.588437 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit. Apr 24 23:35:03.593415 systemd[1]: sshd@6-172.238.161.65:22-4.175.71.9:57920.service: Deactivated successfully. Apr 24 23:35:03.596262 systemd[1]: session-7.scope: Deactivated successfully. Apr 24 23:35:03.598895 systemd[1]: session-7.scope: Consumed 3.129s CPU time, 156.0M memory peak, 0B memory swap peak. Apr 24 23:35:03.600080 systemd-logind[1447]: Removed session 7. Apr 24 23:35:04.607829 update_engine[1448]: I20260424 23:35:04.607755 1448 update_attempter.cc:509] Updating boot flags... Apr 24 23:35:04.689911 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2964) Apr 24 23:35:05.632188 systemd[1]: Created slice kubepods-besteffort-pod5f00945a_6317_489f_ac15_f9f81342d309.slice - libcontainer container kubepods-besteffort-pod5f00945a_6317_489f_ac15_f9f81342d309.slice. 
Apr 24 23:35:05.667557 kubelet[2551]: I0424 23:35:05.667448 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5f00945a-6317-489f-ac15-f9f81342d309-typha-certs\") pod \"calico-typha-55c7b9bb76-rld7c\" (UID: \"5f00945a-6317-489f-ac15-f9f81342d309\") " pod="calico-system/calico-typha-55c7b9bb76-rld7c" Apr 24 23:35:05.667955 kubelet[2551]: I0424 23:35:05.667585 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhtwr\" (UniqueName: \"kubernetes.io/projected/5f00945a-6317-489f-ac15-f9f81342d309-kube-api-access-vhtwr\") pod \"calico-typha-55c7b9bb76-rld7c\" (UID: \"5f00945a-6317-489f-ac15-f9f81342d309\") " pod="calico-system/calico-typha-55c7b9bb76-rld7c" Apr 24 23:35:05.667955 kubelet[2551]: I0424 23:35:05.667605 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f00945a-6317-489f-ac15-f9f81342d309-tigera-ca-bundle\") pod \"calico-typha-55c7b9bb76-rld7c\" (UID: \"5f00945a-6317-489f-ac15-f9f81342d309\") " pod="calico-system/calico-typha-55c7b9bb76-rld7c" Apr 24 23:35:05.681067 systemd[1]: Created slice kubepods-besteffort-pod616a1788_6e53_4484_b026_d3ac7bbd3397.slice - libcontainer container kubepods-besteffort-pod616a1788_6e53_4484_b026_d3ac7bbd3397.slice. 
Apr 24 23:35:05.769408 kubelet[2551]: I0424 23:35:05.768735 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/616a1788-6e53-4484-b026-d3ac7bbd3397-lib-modules\") pod \"calico-node-jj48l\" (UID: \"616a1788-6e53-4484-b026-d3ac7bbd3397\") " pod="calico-system/calico-node-jj48l" Apr 24 23:35:05.769408 kubelet[2551]: I0424 23:35:05.768764 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/616a1788-6e53-4484-b026-d3ac7bbd3397-cni-bin-dir\") pod \"calico-node-jj48l\" (UID: \"616a1788-6e53-4484-b026-d3ac7bbd3397\") " pod="calico-system/calico-node-jj48l" Apr 24 23:35:05.769408 kubelet[2551]: I0424 23:35:05.768778 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/616a1788-6e53-4484-b026-d3ac7bbd3397-var-lib-calico\") pod \"calico-node-jj48l\" (UID: \"616a1788-6e53-4484-b026-d3ac7bbd3397\") " pod="calico-system/calico-node-jj48l" Apr 24 23:35:05.769408 kubelet[2551]: I0424 23:35:05.768791 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/616a1788-6e53-4484-b026-d3ac7bbd3397-flexvol-driver-host\") pod \"calico-node-jj48l\" (UID: \"616a1788-6e53-4484-b026-d3ac7bbd3397\") " pod="calico-system/calico-node-jj48l" Apr 24 23:35:05.769408 kubelet[2551]: I0424 23:35:05.768807 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/616a1788-6e53-4484-b026-d3ac7bbd3397-node-certs\") pod \"calico-node-jj48l\" (UID: \"616a1788-6e53-4484-b026-d3ac7bbd3397\") " pod="calico-system/calico-node-jj48l" Apr 24 23:35:05.769598 kubelet[2551]: I0424 23:35:05.768828 2551 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/616a1788-6e53-4484-b026-d3ac7bbd3397-var-run-calico\") pod \"calico-node-jj48l\" (UID: \"616a1788-6e53-4484-b026-d3ac7bbd3397\") " pod="calico-system/calico-node-jj48l" Apr 24 23:35:05.769598 kubelet[2551]: I0424 23:35:05.768840 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/616a1788-6e53-4484-b026-d3ac7bbd3397-sys-fs\") pod \"calico-node-jj48l\" (UID: \"616a1788-6e53-4484-b026-d3ac7bbd3397\") " pod="calico-system/calico-node-jj48l" Apr 24 23:35:05.769598 kubelet[2551]: I0424 23:35:05.768854 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/616a1788-6e53-4484-b026-d3ac7bbd3397-xtables-lock\") pod \"calico-node-jj48l\" (UID: \"616a1788-6e53-4484-b026-d3ac7bbd3397\") " pod="calico-system/calico-node-jj48l" Apr 24 23:35:05.769598 kubelet[2551]: I0424 23:35:05.768877 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/616a1788-6e53-4484-b026-d3ac7bbd3397-bpffs\") pod \"calico-node-jj48l\" (UID: \"616a1788-6e53-4484-b026-d3ac7bbd3397\") " pod="calico-system/calico-node-jj48l" Apr 24 23:35:05.769598 kubelet[2551]: I0424 23:35:05.768890 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/616a1788-6e53-4484-b026-d3ac7bbd3397-cni-log-dir\") pod \"calico-node-jj48l\" (UID: \"616a1788-6e53-4484-b026-d3ac7bbd3397\") " pod="calico-system/calico-node-jj48l" Apr 24 23:35:05.769734 kubelet[2551]: I0424 23:35:05.768902 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" 
(UniqueName: \"kubernetes.io/host-path/616a1788-6e53-4484-b026-d3ac7bbd3397-nodeproc\") pod \"calico-node-jj48l\" (UID: \"616a1788-6e53-4484-b026-d3ac7bbd3397\") " pod="calico-system/calico-node-jj48l" Apr 24 23:35:05.769734 kubelet[2551]: I0424 23:35:05.768915 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/616a1788-6e53-4484-b026-d3ac7bbd3397-policysync\") pod \"calico-node-jj48l\" (UID: \"616a1788-6e53-4484-b026-d3ac7bbd3397\") " pod="calico-system/calico-node-jj48l" Apr 24 23:35:05.769734 kubelet[2551]: I0424 23:35:05.768927 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8mpl\" (UniqueName: \"kubernetes.io/projected/616a1788-6e53-4484-b026-d3ac7bbd3397-kube-api-access-h8mpl\") pod \"calico-node-jj48l\" (UID: \"616a1788-6e53-4484-b026-d3ac7bbd3397\") " pod="calico-system/calico-node-jj48l" Apr 24 23:35:05.769734 kubelet[2551]: I0424 23:35:05.768941 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/616a1788-6e53-4484-b026-d3ac7bbd3397-cni-net-dir\") pod \"calico-node-jj48l\" (UID: \"616a1788-6e53-4484-b026-d3ac7bbd3397\") " pod="calico-system/calico-node-jj48l" Apr 24 23:35:05.769734 kubelet[2551]: I0424 23:35:05.768955 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/616a1788-6e53-4484-b026-d3ac7bbd3397-tigera-ca-bundle\") pod \"calico-node-jj48l\" (UID: \"616a1788-6e53-4484-b026-d3ac7bbd3397\") " pod="calico-system/calico-node-jj48l" Apr 24 23:35:05.792461 kubelet[2551]: E0424 23:35:05.791992 2551 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqhqx" podUID="53bf7884-6c7b-4ee5-be4f-549901e455a2" Apr 24 23:35:05.869816 kubelet[2551]: I0424 23:35:05.869785 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jzj7\" (UniqueName: \"kubernetes.io/projected/53bf7884-6c7b-4ee5-be4f-549901e455a2-kube-api-access-6jzj7\") pod \"csi-node-driver-hqhqx\" (UID: \"53bf7884-6c7b-4ee5-be4f-549901e455a2\") " pod="calico-system/csi-node-driver-hqhqx" Apr 24 23:35:05.870386 kubelet[2551]: I0424 23:35:05.869917 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/53bf7884-6c7b-4ee5-be4f-549901e455a2-registration-dir\") pod \"csi-node-driver-hqhqx\" (UID: \"53bf7884-6c7b-4ee5-be4f-549901e455a2\") " pod="calico-system/csi-node-driver-hqhqx" Apr 24 23:35:05.871135 kubelet[2551]: I0424 23:35:05.870506 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53bf7884-6c7b-4ee5-be4f-549901e455a2-kubelet-dir\") pod \"csi-node-driver-hqhqx\" (UID: \"53bf7884-6c7b-4ee5-be4f-549901e455a2\") " pod="calico-system/csi-node-driver-hqhqx" Apr 24 23:35:05.871404 kubelet[2551]: I0424 23:35:05.870525 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/53bf7884-6c7b-4ee5-be4f-549901e455a2-socket-dir\") pod \"csi-node-driver-hqhqx\" (UID: \"53bf7884-6c7b-4ee5-be4f-549901e455a2\") " pod="calico-system/csi-node-driver-hqhqx" Apr 24 23:35:05.871444 kubelet[2551]: I0424 23:35:05.871399 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/53bf7884-6c7b-4ee5-be4f-549901e455a2-varrun\") pod 
\"csi-node-driver-hqhqx\" (UID: \"53bf7884-6c7b-4ee5-be4f-549901e455a2\") " pod="calico-system/csi-node-driver-hqhqx" Apr 24 23:35:05.874527 kubelet[2551]: E0424 23:35:05.874373 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.874527 kubelet[2551]: W0424 23:35:05.874387 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.874527 kubelet[2551]: E0424 23:35:05.874401 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:35:05.878301 kubelet[2551]: E0424 23:35:05.878173 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.878301 kubelet[2551]: W0424 23:35:05.878188 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.878301 kubelet[2551]: E0424 23:35:05.878202 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:35:05.881927 kubelet[2551]: E0424 23:35:05.881879 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.881927 kubelet[2551]: W0424 23:35:05.881890 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.881927 kubelet[2551]: E0424 23:35:05.881903 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:35:05.940187 kubelet[2551]: E0424 23:35:05.940055 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:35:05.940819 containerd[1467]: time="2026-04-24T23:35:05.940780053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55c7b9bb76-rld7c,Uid:5f00945a-6317-489f-ac15-f9f81342d309,Namespace:calico-system,Attempt:0,}" Apr 24 23:35:05.971982 kubelet[2551]: E0424 23:35:05.971944 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.972175 kubelet[2551]: W0424 23:35:05.972068 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.972175 kubelet[2551]: E0424 23:35:05.972089 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:35:05.972506 containerd[1467]: time="2026-04-24T23:35:05.972061707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:35:05.972506 containerd[1467]: time="2026-04-24T23:35:05.972147213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:35:05.972506 containerd[1467]: time="2026-04-24T23:35:05.972162102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:05.972506 containerd[1467]: time="2026-04-24T23:35:05.972267906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:05.972710 kubelet[2551]: E0424 23:35:05.972634 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.972710 kubelet[2551]: W0424 23:35:05.972646 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.972710 kubelet[2551]: E0424 23:35:05.972656 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:35:05.972980 kubelet[2551]: E0424 23:35:05.972965 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.972980 kubelet[2551]: W0424 23:35:05.972978 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.973039 kubelet[2551]: E0424 23:35:05.972990 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:35:05.973318 kubelet[2551]: E0424 23:35:05.973306 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.973318 kubelet[2551]: W0424 23:35:05.973316 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.973374 kubelet[2551]: E0424 23:35:05.973325 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:35:05.973973 kubelet[2551]: E0424 23:35:05.973666 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.973973 kubelet[2551]: W0424 23:35:05.973886 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.973973 kubelet[2551]: E0424 23:35:05.973895 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:35:05.974304 kubelet[2551]: E0424 23:35:05.974231 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.974304 kubelet[2551]: W0424 23:35:05.974241 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.974304 kubelet[2551]: E0424 23:35:05.974249 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:35:05.974659 kubelet[2551]: E0424 23:35:05.974564 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.974659 kubelet[2551]: W0424 23:35:05.974574 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.974659 kubelet[2551]: E0424 23:35:05.974583 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:35:05.975164 kubelet[2551]: E0424 23:35:05.975085 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.975164 kubelet[2551]: W0424 23:35:05.975095 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.975164 kubelet[2551]: E0424 23:35:05.975104 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:35:05.975623 kubelet[2551]: E0424 23:35:05.975509 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.975623 kubelet[2551]: W0424 23:35:05.975519 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.975623 kubelet[2551]: E0424 23:35:05.975528 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:35:05.978144 kubelet[2551]: E0424 23:35:05.977801 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.978144 kubelet[2551]: W0424 23:35:05.977814 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.978144 kubelet[2551]: E0424 23:35:05.978020 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:35:05.978395 kubelet[2551]: E0424 23:35:05.978278 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.978395 kubelet[2551]: W0424 23:35:05.978288 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.978395 kubelet[2551]: E0424 23:35:05.978297 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:35:05.978615 kubelet[2551]: E0424 23:35:05.978507 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.978615 kubelet[2551]: W0424 23:35:05.978516 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.978615 kubelet[2551]: E0424 23:35:05.978526 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:35:05.978888 kubelet[2551]: E0424 23:35:05.978781 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.978888 kubelet[2551]: W0424 23:35:05.978791 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.978888 kubelet[2551]: E0424 23:35:05.978799 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:35:05.979086 kubelet[2551]: E0424 23:35:05.978991 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.979086 kubelet[2551]: W0424 23:35:05.979000 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.979086 kubelet[2551]: E0424 23:35:05.979008 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:35:05.979395 kubelet[2551]: E0424 23:35:05.979202 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.979395 kubelet[2551]: W0424 23:35:05.979212 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.979395 kubelet[2551]: E0424 23:35:05.979220 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:35:05.979643 kubelet[2551]: E0424 23:35:05.979532 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.979643 kubelet[2551]: W0424 23:35:05.979542 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.979643 kubelet[2551]: E0424 23:35:05.979551 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:35:05.979899 kubelet[2551]: E0424 23:35:05.979791 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.979899 kubelet[2551]: W0424 23:35:05.979801 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.979899 kubelet[2551]: E0424 23:35:05.979809 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:35:05.980027 kubelet[2551]: E0424 23:35:05.980017 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.980069 kubelet[2551]: W0424 23:35:05.980060 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.980115 kubelet[2551]: E0424 23:35:05.980106 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:35:05.980521 kubelet[2551]: E0424 23:35:05.980510 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.980570 kubelet[2551]: W0424 23:35:05.980561 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.980620 kubelet[2551]: E0424 23:35:05.980611 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:35:05.980869 kubelet[2551]: E0424 23:35:05.980858 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.980930 kubelet[2551]: W0424 23:35:05.980919 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.980976 kubelet[2551]: E0424 23:35:05.980967 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:35:05.981216 kubelet[2551]: E0424 23:35:05.981204 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.981270 kubelet[2551]: W0424 23:35:05.981260 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.981311 kubelet[2551]: E0424 23:35:05.981302 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:35:05.981616 kubelet[2551]: E0424 23:35:05.981604 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.981889 kubelet[2551]: W0424 23:35:05.981655 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.982030 kubelet[2551]: E0424 23:35:05.981947 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:35:05.982901 kubelet[2551]: E0424 23:35:05.982889 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.982959 kubelet[2551]: W0424 23:35:05.982948 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.983020 kubelet[2551]: E0424 23:35:05.983009 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:35:05.984315 kubelet[2551]: E0424 23:35:05.984218 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.984596 kubelet[2551]: W0424 23:35:05.984575 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.985208 kubelet[2551]: E0424 23:35:05.984758 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:35:05.987576 containerd[1467]: time="2026-04-24T23:35:05.986415822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jj48l,Uid:616a1788-6e53-4484-b026-d3ac7bbd3397,Namespace:calico-system,Attempt:0,}" Apr 24 23:35:05.988046 kubelet[2551]: E0424 23:35:05.988031 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:05.988140 kubelet[2551]: W0424 23:35:05.988124 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:05.988292 kubelet[2551]: E0424 23:35:05.988218 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:35:06.002440 kubelet[2551]: E0424 23:35:06.002423 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:06.002894 kubelet[2551]: W0424 23:35:06.002620 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:06.003518 kubelet[2551]: E0424 23:35:06.003497 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:35:06.015965 systemd[1]: Started cri-containerd-2db1d4e5f6445c44dc5c821808357e9276872a39bae945de3caaaac6f2b5cf95.scope - libcontainer container 2db1d4e5f6445c44dc5c821808357e9276872a39bae945de3caaaac6f2b5cf95. Apr 24 23:35:06.034399 containerd[1467]: time="2026-04-24T23:35:06.034325864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:35:06.034496 containerd[1467]: time="2026-04-24T23:35:06.034446698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:35:06.035345 containerd[1467]: time="2026-04-24T23:35:06.035188841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:06.035345 containerd[1467]: time="2026-04-24T23:35:06.035262858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:06.065852 systemd[1]: Started cri-containerd-44f891de4020bfd4f0c0445d4cf014a56d520a7d519c4a49d40d9d2622152253.scope - libcontainer container 44f891de4020bfd4f0c0445d4cf014a56d520a7d519c4a49d40d9d2622152253. Apr 24 23:35:06.085727 containerd[1467]: time="2026-04-24T23:35:06.085627772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55c7b9bb76-rld7c,Uid:5f00945a-6317-489f-ac15-f9f81342d309,Namespace:calico-system,Attempt:0,} returns sandbox id \"2db1d4e5f6445c44dc5c821808357e9276872a39bae945de3caaaac6f2b5cf95\"" Apr 24 23:35:06.087472 kubelet[2551]: E0424 23:35:06.086395 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:35:06.091065 containerd[1467]: time="2026-04-24T23:35:06.091036273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 24 23:35:06.119495 containerd[1467]: time="2026-04-24T23:35:06.119466828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jj48l,Uid:616a1788-6e53-4484-b026-d3ac7bbd3397,Namespace:calico-system,Attempt:0,} returns sandbox id \"44f891de4020bfd4f0c0445d4cf014a56d520a7d519c4a49d40d9d2622152253\"" Apr 24 23:35:06.853254 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2941282820.mount: Deactivated successfully. Apr 24 23:35:07.525320 containerd[1467]: time="2026-04-24T23:35:07.525256927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:07.526446 containerd[1467]: time="2026-04-24T23:35:07.526284678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 24 23:35:07.528304 containerd[1467]: time="2026-04-24T23:35:07.527150748Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:07.530018 containerd[1467]: time="2026-04-24T23:35:07.529042079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:07.530018 containerd[1467]: time="2026-04-24T23:35:07.529748015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.438678005s" Apr 24 23:35:07.530018 containerd[1467]: time="2026-04-24T23:35:07.529946446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 24 23:35:07.531267 containerd[1467]: time="2026-04-24T23:35:07.531248175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 24 23:35:07.546969 containerd[1467]: time="2026-04-24T23:35:07.546946416Z" level=info 
msg="CreateContainer within sandbox \"2db1d4e5f6445c44dc5c821808357e9276872a39bae945de3caaaac6f2b5cf95\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 24 23:35:07.556023 containerd[1467]: time="2026-04-24T23:35:07.555983741Z" level=info msg="CreateContainer within sandbox \"2db1d4e5f6445c44dc5c821808357e9276872a39bae945de3caaaac6f2b5cf95\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f0900dab986f4eefab9468eaedf82185a48213384ed16f77c2ba48b9f6d83085\"" Apr 24 23:35:07.556852 containerd[1467]: time="2026-04-24T23:35:07.556814552Z" level=info msg="StartContainer for \"f0900dab986f4eefab9468eaedf82185a48213384ed16f77c2ba48b9f6d83085\"" Apr 24 23:35:07.584980 systemd[1]: Started cri-containerd-f0900dab986f4eefab9468eaedf82185a48213384ed16f77c2ba48b9f6d83085.scope - libcontainer container f0900dab986f4eefab9468eaedf82185a48213384ed16f77c2ba48b9f6d83085. Apr 24 23:35:07.636021 containerd[1467]: time="2026-04-24T23:35:07.635974837Z" level=info msg="StartContainer for \"f0900dab986f4eefab9468eaedf82185a48213384ed16f77c2ba48b9f6d83085\" returns successfully" Apr 24 23:35:07.786192 kubelet[2551]: E0424 23:35:07.783329 2551 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqhqx" podUID="53bf7884-6c7b-4ee5-be4f-549901e455a2" Apr 24 23:35:07.854305 kubelet[2551]: E0424 23:35:07.854261 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:35:07.877111 kubelet[2551]: E0424 23:35:07.877079 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:07.877111 kubelet[2551]: W0424 23:35:07.877098 2551 
driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:07.877111 kubelet[2551]: E0424 23:35:07.877116 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:35:07.964956 kubelet[2551]: E0424 23:35:07.964917 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:35:07.975031 kubelet[2551]: I0424 23:35:07.974422 2551 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-55c7b9bb76-rld7c" podStartSLOduration=1.5326725589999999 podStartE2EDuration="2.974401705s" podCreationTimestamp="2026-04-24 23:35:05 +0000 UTC" firstStartedPulling="2026-04-24 23:35:06.089366966 +0000 UTC m=+18.427563498" lastFinishedPulling="2026-04-24 23:35:07.531096112 +0000 UTC m=+19.869292644" observedRunningTime="2026-04-24 23:35:07.86525827 +0000 UTC m=+20.203454792" watchObservedRunningTime="2026-04-24 23:35:07.974401705 +0000 UTC m=+20.312598227"
Error: unexpected end of JSON input" Apr 24 23:35:07.991402 kubelet[2551]: E0424 23:35:07.991299 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:07.991402 kubelet[2551]: W0424 23:35:07.991308 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:07.991402 kubelet[2551]: E0424 23:35:07.991317 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:35:07.991547 kubelet[2551]: E0424 23:35:07.991537 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:07.991592 kubelet[2551]: W0424 23:35:07.991583 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:07.991632 kubelet[2551]: E0424 23:35:07.991623 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:35:07.992025 kubelet[2551]: E0424 23:35:07.991930 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:07.992025 kubelet[2551]: W0424 23:35:07.991940 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:07.992025 kubelet[2551]: E0424 23:35:07.991948 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:35:07.992171 kubelet[2551]: E0424 23:35:07.992161 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:07.992217 kubelet[2551]: W0424 23:35:07.992208 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:07.992256 kubelet[2551]: E0424 23:35:07.992247 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:35:07.992618 kubelet[2551]: E0424 23:35:07.992515 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:07.992618 kubelet[2551]: W0424 23:35:07.992525 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:07.992618 kubelet[2551]: E0424 23:35:07.992533 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:35:07.992798 kubelet[2551]: E0424 23:35:07.992788 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:35:07.992842 kubelet[2551]: W0424 23:35:07.992833 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:35:07.992982 kubelet[2551]: E0424 23:35:07.992872 2551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:35:08.338572 containerd[1467]: time="2026-04-24T23:35:08.338534007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:08.341177 containerd[1467]: time="2026-04-24T23:35:08.341131172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 24 23:35:08.343691 containerd[1467]: time="2026-04-24T23:35:08.342494011Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:08.345795 containerd[1467]: time="2026-04-24T23:35:08.345773875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:08.347238 containerd[1467]: time="2026-04-24T23:35:08.347210891Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 815.866551ms" Apr 24 23:35:08.347551 containerd[1467]: time="2026-04-24T23:35:08.347237320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 24 23:35:08.352091 containerd[1467]: time="2026-04-24T23:35:08.352069205Z" level=info msg="CreateContainer within sandbox \"44f891de4020bfd4f0c0445d4cf014a56d520a7d519c4a49d40d9d2622152253\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 24 23:35:08.364488 containerd[1467]: time="2026-04-24T23:35:08.364452684Z" level=info msg="CreateContainer within sandbox \"44f891de4020bfd4f0c0445d4cf014a56d520a7d519c4a49d40d9d2622152253\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"344b038ba6961afbcbdf7e7859bbbb789b5a1eb2b9de981377853f93fd252ae4\"" Apr 24 23:35:08.365129 containerd[1467]: time="2026-04-24T23:35:08.365065956Z" level=info msg="StartContainer for \"344b038ba6961afbcbdf7e7859bbbb789b5a1eb2b9de981377853f93fd252ae4\"" Apr 24 23:35:08.398324 systemd[1]: run-containerd-runc-k8s.io-344b038ba6961afbcbdf7e7859bbbb789b5a1eb2b9de981377853f93fd252ae4-runc.G5nSt2.mount: Deactivated successfully. Apr 24 23:35:08.405805 systemd[1]: Started cri-containerd-344b038ba6961afbcbdf7e7859bbbb789b5a1eb2b9de981377853f93fd252ae4.scope - libcontainer container 344b038ba6961afbcbdf7e7859bbbb789b5a1eb2b9de981377853f93fd252ae4. Apr 24 23:35:08.440353 containerd[1467]: time="2026-04-24T23:35:08.440316006Z" level=info msg="StartContainer for \"344b038ba6961afbcbdf7e7859bbbb789b5a1eb2b9de981377853f93fd252ae4\" returns successfully" Apr 24 23:35:08.458929 systemd[1]: cri-containerd-344b038ba6961afbcbdf7e7859bbbb789b5a1eb2b9de981377853f93fd252ae4.scope: Deactivated successfully. 
Apr 24 23:35:08.542163 containerd[1467]: time="2026-04-24T23:35:08.542104415Z" level=info msg="shim disconnected" id=344b038ba6961afbcbdf7e7859bbbb789b5a1eb2b9de981377853f93fd252ae4 namespace=k8s.io Apr 24 23:35:08.542163 containerd[1467]: time="2026-04-24T23:35:08.542159282Z" level=warning msg="cleaning up after shim disconnected" id=344b038ba6961afbcbdf7e7859bbbb789b5a1eb2b9de981377853f93fd252ae4 namespace=k8s.io Apr 24 23:35:08.542163 containerd[1467]: time="2026-04-24T23:35:08.542168422Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:35:08.783459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-344b038ba6961afbcbdf7e7859bbbb789b5a1eb2b9de981377853f93fd252ae4-rootfs.mount: Deactivated successfully. Apr 24 23:35:08.857704 kubelet[2551]: I0424 23:35:08.856661 2551 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:35:08.857704 kubelet[2551]: E0424 23:35:08.857241 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:35:08.860143 containerd[1467]: time="2026-04-24T23:35:08.860111857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 24 23:35:09.783726 kubelet[2551]: E0424 23:35:09.783350 2551 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqhqx" podUID="53bf7884-6c7b-4ee5-be4f-549901e455a2" Apr 24 23:35:11.783114 kubelet[2551]: E0424 23:35:11.783070 2551 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqhqx" 
podUID="53bf7884-6c7b-4ee5-be4f-549901e455a2" Apr 24 23:35:13.182448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount312899519.mount: Deactivated successfully. Apr 24 23:35:13.221867 containerd[1467]: time="2026-04-24T23:35:13.221717631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:13.223798 containerd[1467]: time="2026-04-24T23:35:13.223263658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 24 23:35:13.225413 containerd[1467]: time="2026-04-24T23:35:13.225374097Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:13.229959 containerd[1467]: time="2026-04-24T23:35:13.229932771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:13.232120 containerd[1467]: time="2026-04-24T23:35:13.232080868Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.371836957s" Apr 24 23:35:13.232185 containerd[1467]: time="2026-04-24T23:35:13.232128677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 24 23:35:13.240124 containerd[1467]: time="2026-04-24T23:35:13.240080296Z" level=info msg="CreateContainer within sandbox 
\"44f891de4020bfd4f0c0445d4cf014a56d520a7d519c4a49d40d9d2622152253\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 24 23:35:13.263424 containerd[1467]: time="2026-04-24T23:35:13.259511374Z" level=info msg="CreateContainer within sandbox \"44f891de4020bfd4f0c0445d4cf014a56d520a7d519c4a49d40d9d2622152253\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"286dcc84ba71f5480b7e1f6b5169d067daa8fe994b6bd1ca3c61f8ca5b2b7a4c\"" Apr 24 23:35:13.263424 containerd[1467]: time="2026-04-24T23:35:13.260909897Z" level=info msg="StartContainer for \"286dcc84ba71f5480b7e1f6b5169d067daa8fe994b6bd1ca3c61f8ca5b2b7a4c\"" Apr 24 23:35:13.310936 systemd[1]: Started cri-containerd-286dcc84ba71f5480b7e1f6b5169d067daa8fe994b6bd1ca3c61f8ca5b2b7a4c.scope - libcontainer container 286dcc84ba71f5480b7e1f6b5169d067daa8fe994b6bd1ca3c61f8ca5b2b7a4c. Apr 24 23:35:13.354144 containerd[1467]: time="2026-04-24T23:35:13.354022596Z" level=info msg="StartContainer for \"286dcc84ba71f5480b7e1f6b5169d067daa8fe994b6bd1ca3c61f8ca5b2b7a4c\" returns successfully" Apr 24 23:35:13.419424 systemd[1]: cri-containerd-286dcc84ba71f5480b7e1f6b5169d067daa8fe994b6bd1ca3c61f8ca5b2b7a4c.scope: Deactivated successfully. 
Apr 24 23:35:13.589474 containerd[1467]: time="2026-04-24T23:35:13.589235738Z" level=info msg="shim disconnected" id=286dcc84ba71f5480b7e1f6b5169d067daa8fe994b6bd1ca3c61f8ca5b2b7a4c namespace=k8s.io Apr 24 23:35:13.589474 containerd[1467]: time="2026-04-24T23:35:13.589310256Z" level=warning msg="cleaning up after shim disconnected" id=286dcc84ba71f5480b7e1f6b5169d067daa8fe994b6bd1ca3c61f8ca5b2b7a4c namespace=k8s.io Apr 24 23:35:13.589474 containerd[1467]: time="2026-04-24T23:35:13.589326035Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:35:13.783867 kubelet[2551]: E0424 23:35:13.783432 2551 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqhqx" podUID="53bf7884-6c7b-4ee5-be4f-549901e455a2" Apr 24 23:35:13.869722 containerd[1467]: time="2026-04-24T23:35:13.869467337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 24 23:35:14.186095 systemd[1]: run-containerd-runc-k8s.io-286dcc84ba71f5480b7e1f6b5169d067daa8fe994b6bd1ca3c61f8ca5b2b7a4c-runc.ZcqIhk.mount: Deactivated successfully. Apr 24 23:35:14.186229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-286dcc84ba71f5480b7e1f6b5169d067daa8fe994b6bd1ca3c61f8ca5b2b7a4c-rootfs.mount: Deactivated successfully. 
Apr 24 23:35:15.786147 kubelet[2551]: E0424 23:35:15.786056 2551 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqhqx" podUID="53bf7884-6c7b-4ee5-be4f-549901e455a2" Apr 24 23:35:16.045891 containerd[1467]: time="2026-04-24T23:35:16.045761307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:16.047126 containerd[1467]: time="2026-04-24T23:35:16.047088939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 24 23:35:16.047768 containerd[1467]: time="2026-04-24T23:35:16.047726780Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:16.049818 containerd[1467]: time="2026-04-24T23:35:16.049493748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:16.050229 containerd[1467]: time="2026-04-24T23:35:16.050202078Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.180687142s" Apr 24 23:35:16.050268 containerd[1467]: time="2026-04-24T23:35:16.050228397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference 
\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 24 23:35:16.054186 containerd[1467]: time="2026-04-24T23:35:16.054163902Z" level=info msg="CreateContainer within sandbox \"44f891de4020bfd4f0c0445d4cf014a56d520a7d519c4a49d40d9d2622152253\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 24 23:35:16.071177 containerd[1467]: time="2026-04-24T23:35:16.071153896Z" level=info msg="CreateContainer within sandbox \"44f891de4020bfd4f0c0445d4cf014a56d520a7d519c4a49d40d9d2622152253\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"93b210b95d57eb07942b122075e2793c3f76d49b466e640a3e1921e4a37ec0aa\"" Apr 24 23:35:16.071747 containerd[1467]: time="2026-04-24T23:35:16.071725349Z" level=info msg="StartContainer for \"93b210b95d57eb07942b122075e2793c3f76d49b466e640a3e1921e4a37ec0aa\"" Apr 24 23:35:16.108795 systemd[1]: Started cri-containerd-93b210b95d57eb07942b122075e2793c3f76d49b466e640a3e1921e4a37ec0aa.scope - libcontainer container 93b210b95d57eb07942b122075e2793c3f76d49b466e640a3e1921e4a37ec0aa. Apr 24 23:35:16.139737 containerd[1467]: time="2026-04-24T23:35:16.139706964Z" level=info msg="StartContainer for \"93b210b95d57eb07942b122075e2793c3f76d49b466e640a3e1921e4a37ec0aa\" returns successfully" Apr 24 23:35:16.631934 containerd[1467]: time="2026-04-24T23:35:16.631895528Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 23:35:16.634552 systemd[1]: cri-containerd-93b210b95d57eb07942b122075e2793c3f76d49b466e640a3e1921e4a37ec0aa.scope: Deactivated successfully. Apr 24 23:35:16.655221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93b210b95d57eb07942b122075e2793c3f76d49b466e640a3e1921e4a37ec0aa-rootfs.mount: Deactivated successfully. 
Apr 24 23:35:16.682236 containerd[1467]: time="2026-04-24T23:35:16.682161940Z" level=info msg="shim disconnected" id=93b210b95d57eb07942b122075e2793c3f76d49b466e640a3e1921e4a37ec0aa namespace=k8s.io Apr 24 23:35:16.682236 containerd[1467]: time="2026-04-24T23:35:16.682230488Z" level=warning msg="cleaning up after shim disconnected" id=93b210b95d57eb07942b122075e2793c3f76d49b466e640a3e1921e4a37ec0aa namespace=k8s.io Apr 24 23:35:16.682236 containerd[1467]: time="2026-04-24T23:35:16.682241077Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:35:16.709884 kubelet[2551]: I0424 23:35:16.709862 2551 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 24 23:35:16.755927 systemd[1]: Created slice kubepods-burstable-pod029f22b5_c034_4b60_bead_4fa97fdfc735.slice - libcontainer container kubepods-burstable-pod029f22b5_c034_4b60_bead_4fa97fdfc735.slice. Apr 24 23:35:16.766202 systemd[1]: Created slice kubepods-burstable-pod714bc81a_57d7_4165_96f9_6fce5ffe62a9.slice - libcontainer container kubepods-burstable-pod714bc81a_57d7_4165_96f9_6fce5ffe62a9.slice. Apr 24 23:35:16.779937 systemd[1]: Created slice kubepods-besteffort-podf6312e20_cf69_45d0_8185_eca763f065e1.slice - libcontainer container kubepods-besteffort-podf6312e20_cf69_45d0_8185_eca763f065e1.slice. Apr 24 23:35:16.787829 systemd[1]: Created slice kubepods-besteffort-podd831dedb_5364_4c11_9554_a57872730716.slice - libcontainer container kubepods-besteffort-podd831dedb_5364_4c11_9554_a57872730716.slice. Apr 24 23:35:16.796768 systemd[1]: Created slice kubepods-besteffort-poda78654c3_b12f_4de9_ae62_9fb13eb78d56.slice - libcontainer container kubepods-besteffort-poda78654c3_b12f_4de9_ae62_9fb13eb78d56.slice. Apr 24 23:35:16.803538 systemd[1]: Created slice kubepods-besteffort-pod8dcb464f_862f_44ec_a163_718ce414c666.slice - libcontainer container kubepods-besteffort-pod8dcb464f_862f_44ec_a163_718ce414c666.slice. 
Apr 24 23:35:16.811951 systemd[1]: Created slice kubepods-besteffort-podc5f4d09e_c860_46dc_87d1_e132c91058f2.slice - libcontainer container kubepods-besteffort-podc5f4d09e_c860_46dc_87d1_e132c91058f2.slice. Apr 24 23:35:16.863164 kubelet[2551]: I0424 23:35:16.863140 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/714bc81a-57d7-4165-96f9-6fce5ffe62a9-config-volume\") pod \"coredns-7d764666f9-8hxr9\" (UID: \"714bc81a-57d7-4165-96f9-6fce5ffe62a9\") " pod="kube-system/coredns-7d764666f9-8hxr9" Apr 24 23:35:16.863511 kubelet[2551]: I0424 23:35:16.863495 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/029f22b5-c034-4b60-bead-4fa97fdfc735-config-volume\") pod \"coredns-7d764666f9-lg98j\" (UID: \"029f22b5-c034-4b60-bead-4fa97fdfc735\") " pod="kube-system/coredns-7d764666f9-lg98j" Apr 24 23:35:16.863715 kubelet[2551]: I0424 23:35:16.863580 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn7dm\" (UniqueName: \"kubernetes.io/projected/d831dedb-5364-4c11-9554-a57872730716-kube-api-access-xn7dm\") pod \"calico-apiserver-ccd877f8d-72gcn\" (UID: \"d831dedb-5364-4c11-9554-a57872730716\") " pod="calico-system/calico-apiserver-ccd877f8d-72gcn" Apr 24 23:35:16.863715 kubelet[2551]: I0424 23:35:16.863608 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8dcb464f-862f-44ec-a163-718ce414c666-calico-apiserver-certs\") pod \"calico-apiserver-ccd877f8d-t49xs\" (UID: \"8dcb464f-862f-44ec-a163-718ce414c666\") " pod="calico-system/calico-apiserver-ccd877f8d-t49xs" Apr 24 23:35:16.863715 kubelet[2551]: I0424 23:35:16.863625 2551 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gtbc\" (UniqueName: \"kubernetes.io/projected/8dcb464f-862f-44ec-a163-718ce414c666-kube-api-access-6gtbc\") pod \"calico-apiserver-ccd877f8d-t49xs\" (UID: \"8dcb464f-862f-44ec-a163-718ce414c666\") " pod="calico-system/calico-apiserver-ccd877f8d-t49xs" Apr 24 23:35:16.863715 kubelet[2551]: I0424 23:35:16.863640 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxjr4\" (UniqueName: \"kubernetes.io/projected/714bc81a-57d7-4165-96f9-6fce5ffe62a9-kube-api-access-bxjr4\") pod \"coredns-7d764666f9-8hxr9\" (UID: \"714bc81a-57d7-4165-96f9-6fce5ffe62a9\") " pod="kube-system/coredns-7d764666f9-8hxr9" Apr 24 23:35:16.863715 kubelet[2551]: I0424 23:35:16.863655 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbwv4\" (UniqueName: \"kubernetes.io/projected/029f22b5-c034-4b60-bead-4fa97fdfc735-kube-api-access-jbwv4\") pod \"coredns-7d764666f9-lg98j\" (UID: \"029f22b5-c034-4b60-bead-4fa97fdfc735\") " pod="kube-system/coredns-7d764666f9-lg98j" Apr 24 23:35:16.863895 kubelet[2551]: I0424 23:35:16.863692 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a78654c3-b12f-4de9-ae62-9fb13eb78d56-config\") pod \"goldmane-9f7667bb8-tqq6w\" (UID: \"a78654c3-b12f-4de9-ae62-9fb13eb78d56\") " pod="calico-system/goldmane-9f7667bb8-tqq6w" Apr 24 23:35:16.863895 kubelet[2551]: I0424 23:35:16.863721 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a78654c3-b12f-4de9-ae62-9fb13eb78d56-goldmane-key-pair\") pod \"goldmane-9f7667bb8-tqq6w\" (UID: \"a78654c3-b12f-4de9-ae62-9fb13eb78d56\") " pod="calico-system/goldmane-9f7667bb8-tqq6w" Apr 24 23:35:16.863895 kubelet[2551]: I0424 
23:35:16.863755 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhw77\" (UniqueName: \"kubernetes.io/projected/a78654c3-b12f-4de9-ae62-9fb13eb78d56-kube-api-access-mhw77\") pod \"goldmane-9f7667bb8-tqq6w\" (UID: \"a78654c3-b12f-4de9-ae62-9fb13eb78d56\") " pod="calico-system/goldmane-9f7667bb8-tqq6w" Apr 24 23:35:16.863895 kubelet[2551]: I0424 23:35:16.863773 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jflp8\" (UniqueName: \"kubernetes.io/projected/f6312e20-cf69-45d0-8185-eca763f065e1-kube-api-access-jflp8\") pod \"calico-kube-controllers-78fd48d56d-w6wzs\" (UID: \"f6312e20-cf69-45d0-8185-eca763f065e1\") " pod="calico-system/calico-kube-controllers-78fd48d56d-w6wzs" Apr 24 23:35:16.863895 kubelet[2551]: I0424 23:35:16.863809 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a78654c3-b12f-4de9-ae62-9fb13eb78d56-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-tqq6w\" (UID: \"a78654c3-b12f-4de9-ae62-9fb13eb78d56\") " pod="calico-system/goldmane-9f7667bb8-tqq6w" Apr 24 23:35:16.864020 kubelet[2551]: I0424 23:35:16.863850 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6312e20-cf69-45d0-8185-eca763f065e1-tigera-ca-bundle\") pod \"calico-kube-controllers-78fd48d56d-w6wzs\" (UID: \"f6312e20-cf69-45d0-8185-eca763f065e1\") " pod="calico-system/calico-kube-controllers-78fd48d56d-w6wzs" Apr 24 23:35:16.864020 kubelet[2551]: I0424 23:35:16.863882 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d831dedb-5364-4c11-9554-a57872730716-calico-apiserver-certs\") pod \"calico-apiserver-ccd877f8d-72gcn\" (UID: 
\"d831dedb-5364-4c11-9554-a57872730716\") " pod="calico-system/calico-apiserver-ccd877f8d-72gcn" Apr 24 23:35:16.897560 containerd[1467]: time="2026-04-24T23:35:16.897170450Z" level=info msg="CreateContainer within sandbox \"44f891de4020bfd4f0c0445d4cf014a56d520a7d519c4a49d40d9d2622152253\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 24 23:35:16.912805 containerd[1467]: time="2026-04-24T23:35:16.912776694Z" level=info msg="CreateContainer within sandbox \"44f891de4020bfd4f0c0445d4cf014a56d520a7d519c4a49d40d9d2622152253\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9ac0f281975c24155283aaa4381177c841e90158b1d90cea59fe17482b63cb69\"" Apr 24 23:35:16.913578 containerd[1467]: time="2026-04-24T23:35:16.913560741Z" level=info msg="StartContainer for \"9ac0f281975c24155283aaa4381177c841e90158b1d90cea59fe17482b63cb69\"" Apr 24 23:35:16.959820 systemd[1]: Started cri-containerd-9ac0f281975c24155283aaa4381177c841e90158b1d90cea59fe17482b63cb69.scope - libcontainer container 9ac0f281975c24155283aaa4381177c841e90158b1d90cea59fe17482b63cb69. 
Apr 24 23:35:16.964811 kubelet[2551]: I0424 23:35:16.964774 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvlpx\" (UniqueName: \"kubernetes.io/projected/c5f4d09e-c860-46dc-87d1-e132c91058f2-kube-api-access-wvlpx\") pod \"whisker-74b6665b9d-d5ddg\" (UID: \"c5f4d09e-c860-46dc-87d1-e132c91058f2\") " pod="calico-system/whisker-74b6665b9d-d5ddg" Apr 24 23:35:16.964939 kubelet[2551]: I0424 23:35:16.964921 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c5f4d09e-c860-46dc-87d1-e132c91058f2-nginx-config\") pod \"whisker-74b6665b9d-d5ddg\" (UID: \"c5f4d09e-c860-46dc-87d1-e132c91058f2\") " pod="calico-system/whisker-74b6665b9d-d5ddg" Apr 24 23:35:16.964973 kubelet[2551]: I0424 23:35:16.964942 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c5f4d09e-c860-46dc-87d1-e132c91058f2-whisker-backend-key-pair\") pod \"whisker-74b6665b9d-d5ddg\" (UID: \"c5f4d09e-c860-46dc-87d1-e132c91058f2\") " pod="calico-system/whisker-74b6665b9d-d5ddg" Apr 24 23:35:16.965009 kubelet[2551]: I0424 23:35:16.964980 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5f4d09e-c860-46dc-87d1-e132c91058f2-whisker-ca-bundle\") pod \"whisker-74b6665b9d-d5ddg\" (UID: \"c5f4d09e-c860-46dc-87d1-e132c91058f2\") " pod="calico-system/whisker-74b6665b9d-d5ddg" Apr 24 23:35:17.007948 containerd[1467]: time="2026-04-24T23:35:17.007772070Z" level=info msg="StartContainer for \"9ac0f281975c24155283aaa4381177c841e90158b1d90cea59fe17482b63cb69\" returns successfully" Apr 24 23:35:17.073906 kubelet[2551]: E0424 23:35:17.073871 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:35:17.084048 containerd[1467]: time="2026-04-24T23:35:17.083631301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-lg98j,Uid:029f22b5-c034-4b60-bead-4fa97fdfc735,Namespace:kube-system,Attempt:0,}" Apr 24 23:35:17.090557 kubelet[2551]: E0424 23:35:17.090513 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:35:17.092732 containerd[1467]: time="2026-04-24T23:35:17.092445416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8hxr9,Uid:714bc81a-57d7-4165-96f9-6fce5ffe62a9,Namespace:kube-system,Attempt:0,}" Apr 24 23:35:17.102890 containerd[1467]: time="2026-04-24T23:35:17.102848037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78fd48d56d-w6wzs,Uid:f6312e20-cf69-45d0-8185-eca763f065e1,Namespace:calico-system,Attempt:0,}" Apr 24 23:35:17.112294 containerd[1467]: time="2026-04-24T23:35:17.111986953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccd877f8d-72gcn,Uid:d831dedb-5364-4c11-9554-a57872730716,Namespace:calico-system,Attempt:0,}" Apr 24 23:35:17.113395 containerd[1467]: time="2026-04-24T23:35:17.113255358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-tqq6w,Uid:a78654c3-b12f-4de9-ae62-9fb13eb78d56,Namespace:calico-system,Attempt:0,}" Apr 24 23:35:17.114687 containerd[1467]: time="2026-04-24T23:35:17.114568732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccd877f8d-t49xs,Uid:8dcb464f-862f-44ec-a163-718ce414c666,Namespace:calico-system,Attempt:0,}" Apr 24 23:35:17.274192 containerd[1467]: time="2026-04-24T23:35:17.273529164Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-74b6665b9d-d5ddg,Uid:c5f4d09e-c860-46dc-87d1-e132c91058f2,Namespace:calico-system,Attempt:0,}" Apr 24 23:35:17.649185 systemd-networkd[1388]: cali92712000602: Link UP Apr 24 23:35:17.649971 systemd-networkd[1388]: cali92712000602: Gained carrier Apr 24 23:35:17.665390 containerd[1467]: 2026-04-24 23:35:17.278 [ERROR][3472] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:35:17.665390 containerd[1467]: 2026-04-24 23:35:17.326 [INFO][3472] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--161--65-k8s-coredns--7d764666f9--lg98j-eth0 coredns-7d764666f9- kube-system 029f22b5-c034-4b60-bead-4fa97fdfc735 880 0 2026-04-24 23:34:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-238-161-65 coredns-7d764666f9-lg98j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali92712000602 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd" Namespace="kube-system" Pod="coredns-7d764666f9-lg98j" WorkloadEndpoint="172--238--161--65-k8s-coredns--7d764666f9--lg98j-" Apr 24 23:35:17.665390 containerd[1467]: 2026-04-24 23:35:17.326 [INFO][3472] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd" Namespace="kube-system" Pod="coredns-7d764666f9-lg98j" WorkloadEndpoint="172--238--161--65-k8s-coredns--7d764666f9--lg98j-eth0" Apr 24 23:35:17.665390 containerd[1467]: 2026-04-24 23:35:17.502 [INFO][3553] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd" HandleID="k8s-pod-network.11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd" Workload="172--238--161--65-k8s-coredns--7d764666f9--lg98j-eth0" Apr 24 23:35:17.665390 containerd[1467]: 2026-04-24 23:35:17.525 [INFO][3553] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd" HandleID="k8s-pod-network.11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd" Workload="172--238--161--65-k8s-coredns--7d764666f9--lg98j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e410), Attrs:map[string]string{"namespace":"kube-system", "node":"172-238-161-65", "pod":"coredns-7d764666f9-lg98j", "timestamp":"2026-04-24 23:35:17.502608447 +0000 UTC"}, Hostname:"172-238-161-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002aa000)} Apr 24 23:35:17.665390 containerd[1467]: 2026-04-24 23:35:17.525 [INFO][3553] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:35:17.665390 containerd[1467]: 2026-04-24 23:35:17.525 [INFO][3553] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:35:17.665390 containerd[1467]: 2026-04-24 23:35:17.525 [INFO][3553] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-161-65' Apr 24 23:35:17.665390 containerd[1467]: 2026-04-24 23:35:17.591 [INFO][3553] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd" host="172-238-161-65" Apr 24 23:35:17.665390 containerd[1467]: 2026-04-24 23:35:17.595 [INFO][3553] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-161-65" Apr 24 23:35:17.665390 containerd[1467]: 2026-04-24 23:35:17.602 [INFO][3553] ipam/ipam.go 526: Trying affinity for 192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:17.665390 containerd[1467]: 2026-04-24 23:35:17.604 [INFO][3553] ipam/ipam.go 160: Attempting to load block cidr=192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:17.665390 containerd[1467]: 2026-04-24 23:35:17.610 [INFO][3553] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:17.665390 containerd[1467]: 2026-04-24 23:35:17.610 [INFO][3553] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd" host="172-238-161-65" Apr 24 23:35:17.665390 containerd[1467]: 2026-04-24 23:35:17.613 [INFO][3553] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd Apr 24 23:35:17.665390 containerd[1467]: 2026-04-24 23:35:17.620 [INFO][3553] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd" host="172-238-161-65" Apr 24 23:35:17.665390 containerd[1467]: 2026-04-24 23:35:17.628 [INFO][3553] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.40.1/26] block=192.168.40.0/26 
handle="k8s-pod-network.11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd" host="172-238-161-65" Apr 24 23:35:17.665390 containerd[1467]: 2026-04-24 23:35:17.628 [INFO][3553] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.40.1/26] handle="k8s-pod-network.11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd" host="172-238-161-65" Apr 24 23:35:17.665390 containerd[1467]: 2026-04-24 23:35:17.628 [INFO][3553] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:35:17.665390 containerd[1467]: 2026-04-24 23:35:17.628 [INFO][3553] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.40.1/26] IPv6=[] ContainerID="11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd" HandleID="k8s-pod-network.11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd" Workload="172--238--161--65-k8s-coredns--7d764666f9--lg98j-eth0" Apr 24 23:35:17.665950 containerd[1467]: 2026-04-24 23:35:17.636 [INFO][3472] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd" Namespace="kube-system" Pod="coredns-7d764666f9-lg98j" WorkloadEndpoint="172--238--161--65-k8s-coredns--7d764666f9--lg98j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--65-k8s-coredns--7d764666f9--lg98j-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"029f22b5-c034-4b60-bead-4fa97fdfc735", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 34, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-65", ContainerID:"", Pod:"coredns-7d764666f9-lg98j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali92712000602", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:35:17.665950 containerd[1467]: 2026-04-24 23:35:17.636 [INFO][3472] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.1/32] ContainerID="11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd" Namespace="kube-system" Pod="coredns-7d764666f9-lg98j" WorkloadEndpoint="172--238--161--65-k8s-coredns--7d764666f9--lg98j-eth0" Apr 24 23:35:17.665950 containerd[1467]: 2026-04-24 23:35:17.636 [INFO][3472] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92712000602 ContainerID="11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd" Namespace="kube-system" Pod="coredns-7d764666f9-lg98j" 
WorkloadEndpoint="172--238--161--65-k8s-coredns--7d764666f9--lg98j-eth0" Apr 24 23:35:17.665950 containerd[1467]: 2026-04-24 23:35:17.650 [INFO][3472] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd" Namespace="kube-system" Pod="coredns-7d764666f9-lg98j" WorkloadEndpoint="172--238--161--65-k8s-coredns--7d764666f9--lg98j-eth0" Apr 24 23:35:17.665950 containerd[1467]: 2026-04-24 23:35:17.650 [INFO][3472] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd" Namespace="kube-system" Pod="coredns-7d764666f9-lg98j" WorkloadEndpoint="172--238--161--65-k8s-coredns--7d764666f9--lg98j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--65-k8s-coredns--7d764666f9--lg98j-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"029f22b5-c034-4b60-bead-4fa97fdfc735", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 34, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-65", ContainerID:"11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd", Pod:"coredns-7d764666f9-lg98j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali92712000602", MAC:"fa:01:70:fc:0b:f8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:35:17.665950 containerd[1467]: 2026-04-24 23:35:17.660 [INFO][3472] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd" Namespace="kube-system" Pod="coredns-7d764666f9-lg98j" WorkloadEndpoint="172--238--161--65-k8s-coredns--7d764666f9--lg98j-eth0" Apr 24 23:35:17.687310 containerd[1467]: time="2026-04-24T23:35:17.687208547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:35:17.687310 containerd[1467]: time="2026-04-24T23:35:17.687267205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:35:17.687310 containerd[1467]: time="2026-04-24T23:35:17.687281355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:17.687728 containerd[1467]: time="2026-04-24T23:35:17.687349933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:17.710826 systemd[1]: Started cri-containerd-11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd.scope - libcontainer container 11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd. Apr 24 23:35:17.725627 systemd-networkd[1388]: cali9f73432d27f: Link UP Apr 24 23:35:17.726781 systemd-networkd[1388]: cali9f73432d27f: Gained carrier Apr 24 23:35:17.746413 containerd[1467]: 2026-04-24 23:35:17.415 [ERROR][3521] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:35:17.746413 containerd[1467]: 2026-04-24 23:35:17.488 [INFO][3521] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--161--65-k8s-goldmane--9f7667bb8--tqq6w-eth0 goldmane-9f7667bb8- calico-system a78654c3-b12f-4de9-ae62-9fb13eb78d56 891 0 2026-04-24 23:35:05 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-238-161-65 goldmane-9f7667bb8-tqq6w eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9f73432d27f [] [] }} ContainerID="96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066" Namespace="calico-system" Pod="goldmane-9f7667bb8-tqq6w" WorkloadEndpoint="172--238--161--65-k8s-goldmane--9f7667bb8--tqq6w-" Apr 24 23:35:17.746413 containerd[1467]: 2026-04-24 23:35:17.488 [INFO][3521] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066" Namespace="calico-system" Pod="goldmane-9f7667bb8-tqq6w" WorkloadEndpoint="172--238--161--65-k8s-goldmane--9f7667bb8--tqq6w-eth0" Apr 24 23:35:17.746413 containerd[1467]: 2026-04-24 23:35:17.614 [INFO][3589] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066" HandleID="k8s-pod-network.96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066" Workload="172--238--161--65-k8s-goldmane--9f7667bb8--tqq6w-eth0" Apr 24 23:35:17.746413 containerd[1467]: 2026-04-24 23:35:17.630 [INFO][3589] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066" HandleID="k8s-pod-network.96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066" Workload="172--238--161--65-k8s-goldmane--9f7667bb8--tqq6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fbae0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-161-65", "pod":"goldmane-9f7667bb8-tqq6w", "timestamp":"2026-04-24 23:35:17.614954655 +0000 UTC"}, Hostname:"172-238-161-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00030cdc0)} Apr 24 23:35:17.746413 containerd[1467]: 2026-04-24 23:35:17.630 [INFO][3589] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:35:17.746413 containerd[1467]: 2026-04-24 23:35:17.630 [INFO][3589] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:35:17.746413 containerd[1467]: 2026-04-24 23:35:17.630 [INFO][3589] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-161-65' Apr 24 23:35:17.746413 containerd[1467]: 2026-04-24 23:35:17.691 [INFO][3589] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066" host="172-238-161-65" Apr 24 23:35:17.746413 containerd[1467]: 2026-04-24 23:35:17.695 [INFO][3589] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-161-65" Apr 24 23:35:17.746413 containerd[1467]: 2026-04-24 23:35:17.699 [INFO][3589] ipam/ipam.go 526: Trying affinity for 192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:17.746413 containerd[1467]: 2026-04-24 23:35:17.701 [INFO][3589] ipam/ipam.go 160: Attempting to load block cidr=192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:17.746413 containerd[1467]: 2026-04-24 23:35:17.703 [INFO][3589] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:17.746413 containerd[1467]: 2026-04-24 23:35:17.703 [INFO][3589] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066" host="172-238-161-65" Apr 24 23:35:17.746413 containerd[1467]: 2026-04-24 23:35:17.705 [INFO][3589] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066 Apr 24 23:35:17.746413 containerd[1467]: 2026-04-24 23:35:17.709 [INFO][3589] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066" host="172-238-161-65" Apr 24 23:35:17.746413 containerd[1467]: 2026-04-24 23:35:17.717 [INFO][3589] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.40.2/26] block=192.168.40.0/26 
handle="k8s-pod-network.96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066" host="172-238-161-65" Apr 24 23:35:17.746413 containerd[1467]: 2026-04-24 23:35:17.717 [INFO][3589] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.40.2/26] handle="k8s-pod-network.96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066" host="172-238-161-65" Apr 24 23:35:17.746413 containerd[1467]: 2026-04-24 23:35:17.717 [INFO][3589] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:35:17.746413 containerd[1467]: 2026-04-24 23:35:17.717 [INFO][3589] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.40.2/26] IPv6=[] ContainerID="96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066" HandleID="k8s-pod-network.96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066" Workload="172--238--161--65-k8s-goldmane--9f7667bb8--tqq6w-eth0" Apr 24 23:35:17.746927 containerd[1467]: 2026-04-24 23:35:17.720 [INFO][3521] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066" Namespace="calico-system" Pod="goldmane-9f7667bb8-tqq6w" WorkloadEndpoint="172--238--161--65-k8s-goldmane--9f7667bb8--tqq6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--65-k8s-goldmane--9f7667bb8--tqq6w-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"a78654c3-b12f-4de9-ae62-9fb13eb78d56", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-65", ContainerID:"", Pod:"goldmane-9f7667bb8-tqq6w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.40.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9f73432d27f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:35:17.746927 containerd[1467]: 2026-04-24 23:35:17.720 [INFO][3521] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.2/32] ContainerID="96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066" Namespace="calico-system" Pod="goldmane-9f7667bb8-tqq6w" WorkloadEndpoint="172--238--161--65-k8s-goldmane--9f7667bb8--tqq6w-eth0" Apr 24 23:35:17.746927 containerd[1467]: 2026-04-24 23:35:17.721 [INFO][3521] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f73432d27f ContainerID="96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066" Namespace="calico-system" Pod="goldmane-9f7667bb8-tqq6w" WorkloadEndpoint="172--238--161--65-k8s-goldmane--9f7667bb8--tqq6w-eth0" Apr 24 23:35:17.746927 containerd[1467]: 2026-04-24 23:35:17.729 [INFO][3521] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066" Namespace="calico-system" Pod="goldmane-9f7667bb8-tqq6w" WorkloadEndpoint="172--238--161--65-k8s-goldmane--9f7667bb8--tqq6w-eth0" Apr 24 23:35:17.746927 containerd[1467]: 2026-04-24 23:35:17.730 [INFO][3521] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066" Namespace="calico-system" 
Pod="goldmane-9f7667bb8-tqq6w" WorkloadEndpoint="172--238--161--65-k8s-goldmane--9f7667bb8--tqq6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--65-k8s-goldmane--9f7667bb8--tqq6w-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"a78654c3-b12f-4de9-ae62-9fb13eb78d56", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-65", ContainerID:"96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066", Pod:"goldmane-9f7667bb8-tqq6w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.40.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9f73432d27f", MAC:"a2:07:a6:a6:0a:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:35:17.746927 containerd[1467]: 2026-04-24 23:35:17.742 [INFO][3521] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066" Namespace="calico-system" Pod="goldmane-9f7667bb8-tqq6w" WorkloadEndpoint="172--238--161--65-k8s-goldmane--9f7667bb8--tqq6w-eth0" Apr 24 23:35:17.772004 containerd[1467]: 
time="2026-04-24T23:35:17.770365886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-lg98j,Uid:029f22b5-c034-4b60-bead-4fa97fdfc735,Namespace:kube-system,Attempt:0,} returns sandbox id \"11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd\"" Apr 24 23:35:17.772072 kubelet[2551]: E0424 23:35:17.771605 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:35:17.780778 containerd[1467]: time="2026-04-24T23:35:17.780033297Z" level=info msg="CreateContainer within sandbox \"11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:35:17.780864 containerd[1467]: time="2026-04-24T23:35:17.780197473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:35:17.785992 containerd[1467]: time="2026-04-24T23:35:17.783366185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:35:17.786717 containerd[1467]: time="2026-04-24T23:35:17.786232865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:17.791378 containerd[1467]: time="2026-04-24T23:35:17.790898245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:17.801567 systemd[1]: Created slice kubepods-besteffort-pod53bf7884_6c7b_4ee5_be4f_549901e455a2.slice - libcontainer container kubepods-besteffort-pod53bf7884_6c7b_4ee5_be4f_549901e455a2.slice. 
Apr 24 23:35:17.821180 containerd[1467]: time="2026-04-24T23:35:17.821110056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hqhqx,Uid:53bf7884-6c7b-4ee5-be4f-549901e455a2,Namespace:calico-system,Attempt:0,}" Apr 24 23:35:17.827826 systemd[1]: Started cri-containerd-96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066.scope - libcontainer container 96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066. Apr 24 23:35:17.840635 containerd[1467]: time="2026-04-24T23:35:17.838537701Z" level=info msg="CreateContainer within sandbox \"11fbfc9f9428028b67bb299a9a869fd20896363447bc640c31d058848d39cacd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6795d7008d5d32c2cfa39674fc9289e4f50a65343a04f2d33f230a0f7e936eea\"" Apr 24 23:35:17.842253 containerd[1467]: time="2026-04-24T23:35:17.841688124Z" level=info msg="StartContainer for \"6795d7008d5d32c2cfa39674fc9289e4f50a65343a04f2d33f230a0f7e936eea\"" Apr 24 23:35:17.845941 systemd-networkd[1388]: calid54d36e3e06: Link UP Apr 24 23:35:17.847183 systemd-networkd[1388]: calid54d36e3e06: Gained carrier Apr 24 23:35:17.877182 containerd[1467]: 2026-04-24 23:35:17.323 [ERROR][3481] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:35:17.877182 containerd[1467]: 2026-04-24 23:35:17.359 [INFO][3481] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--161--65-k8s-calico--apiserver--ccd877f8d--72gcn-eth0 calico-apiserver-ccd877f8d- calico-system d831dedb-5364-4c11-9554-a57872730716 890 0 2026-04-24 23:35:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ccd877f8d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] 
[] []} {k8s 172-238-161-65 calico-apiserver-ccd877f8d-72gcn eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calid54d36e3e06 [] [] }} ContainerID="92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b" Namespace="calico-system" Pod="calico-apiserver-ccd877f8d-72gcn" WorkloadEndpoint="172--238--161--65-k8s-calico--apiserver--ccd877f8d--72gcn-" Apr 24 23:35:17.877182 containerd[1467]: 2026-04-24 23:35:17.359 [INFO][3481] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b" Namespace="calico-system" Pod="calico-apiserver-ccd877f8d-72gcn" WorkloadEndpoint="172--238--161--65-k8s-calico--apiserver--ccd877f8d--72gcn-eth0" Apr 24 23:35:17.877182 containerd[1467]: 2026-04-24 23:35:17.519 [INFO][3562] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b" HandleID="k8s-pod-network.92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b" Workload="172--238--161--65-k8s-calico--apiserver--ccd877f8d--72gcn-eth0" Apr 24 23:35:17.877182 containerd[1467]: 2026-04-24 23:35:17.548 [INFO][3562] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b" HandleID="k8s-pod-network.92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b" Workload="172--238--161--65-k8s-calico--apiserver--ccd877f8d--72gcn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001224a0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-161-65", "pod":"calico-apiserver-ccd877f8d-72gcn", "timestamp":"2026-04-24 23:35:17.519334633 +0000 UTC"}, Hostname:"172-238-161-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", 
Namespace:(*v1.Namespace)(0xc0002b6420)} Apr 24 23:35:17.877182 containerd[1467]: 2026-04-24 23:35:17.548 [INFO][3562] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:35:17.877182 containerd[1467]: 2026-04-24 23:35:17.717 [INFO][3562] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:35:17.877182 containerd[1467]: 2026-04-24 23:35:17.717 [INFO][3562] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-161-65' Apr 24 23:35:17.877182 containerd[1467]: 2026-04-24 23:35:17.793 [INFO][3562] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b" host="172-238-161-65" Apr 24 23:35:17.877182 containerd[1467]: 2026-04-24 23:35:17.798 [INFO][3562] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-161-65" Apr 24 23:35:17.877182 containerd[1467]: 2026-04-24 23:35:17.804 [INFO][3562] ipam/ipam.go 526: Trying affinity for 192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:17.877182 containerd[1467]: 2026-04-24 23:35:17.807 [INFO][3562] ipam/ipam.go 160: Attempting to load block cidr=192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:17.877182 containerd[1467]: 2026-04-24 23:35:17.811 [INFO][3562] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:17.877182 containerd[1467]: 2026-04-24 23:35:17.811 [INFO][3562] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b" host="172-238-161-65" Apr 24 23:35:17.877182 containerd[1467]: 2026-04-24 23:35:17.814 [INFO][3562] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b Apr 24 23:35:17.877182 containerd[1467]: 2026-04-24 23:35:17.818 [INFO][3562] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.40.0/26 handle="k8s-pod-network.92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b" host="172-238-161-65" Apr 24 23:35:17.877182 containerd[1467]: 2026-04-24 23:35:17.826 [INFO][3562] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.40.3/26] block=192.168.40.0/26 handle="k8s-pod-network.92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b" host="172-238-161-65" Apr 24 23:35:17.877182 containerd[1467]: 2026-04-24 23:35:17.826 [INFO][3562] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.40.3/26] handle="k8s-pod-network.92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b" host="172-238-161-65" Apr 24 23:35:17.877182 containerd[1467]: 2026-04-24 23:35:17.826 [INFO][3562] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:35:17.877182 containerd[1467]: 2026-04-24 23:35:17.826 [INFO][3562] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.40.3/26] IPv6=[] ContainerID="92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b" HandleID="k8s-pod-network.92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b" Workload="172--238--161--65-k8s-calico--apiserver--ccd877f8d--72gcn-eth0" Apr 24 23:35:17.878043 containerd[1467]: 2026-04-24 23:35:17.836 [INFO][3481] cni-plugin/k8s.go 418: Populated endpoint ContainerID="92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b" Namespace="calico-system" Pod="calico-apiserver-ccd877f8d-72gcn" WorkloadEndpoint="172--238--161--65-k8s-calico--apiserver--ccd877f8d--72gcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--65-k8s-calico--apiserver--ccd877f8d--72gcn-eth0", GenerateName:"calico-apiserver-ccd877f8d-", Namespace:"calico-system", SelfLink:"", UID:"d831dedb-5364-4c11-9554-a57872730716", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 35, 5, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccd877f8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-65", ContainerID:"", Pod:"calico-apiserver-ccd877f8d-72gcn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid54d36e3e06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:35:17.878043 containerd[1467]: 2026-04-24 23:35:17.837 [INFO][3481] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.3/32] ContainerID="92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b" Namespace="calico-system" Pod="calico-apiserver-ccd877f8d-72gcn" WorkloadEndpoint="172--238--161--65-k8s-calico--apiserver--ccd877f8d--72gcn-eth0" Apr 24 23:35:17.878043 containerd[1467]: 2026-04-24 23:35:17.837 [INFO][3481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid54d36e3e06 ContainerID="92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b" Namespace="calico-system" Pod="calico-apiserver-ccd877f8d-72gcn" WorkloadEndpoint="172--238--161--65-k8s-calico--apiserver--ccd877f8d--72gcn-eth0" Apr 24 23:35:17.878043 containerd[1467]: 2026-04-24 23:35:17.847 [INFO][3481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b" Namespace="calico-system" Pod="calico-apiserver-ccd877f8d-72gcn" WorkloadEndpoint="172--238--161--65-k8s-calico--apiserver--ccd877f8d--72gcn-eth0" Apr 24 23:35:17.878043 containerd[1467]: 2026-04-24 23:35:17.848 [INFO][3481] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b" Namespace="calico-system" Pod="calico-apiserver-ccd877f8d-72gcn" WorkloadEndpoint="172--238--161--65-k8s-calico--apiserver--ccd877f8d--72gcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--65-k8s-calico--apiserver--ccd877f8d--72gcn-eth0", GenerateName:"calico-apiserver-ccd877f8d-", Namespace:"calico-system", SelfLink:"", UID:"d831dedb-5364-4c11-9554-a57872730716", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccd877f8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-65", ContainerID:"92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b", Pod:"calico-apiserver-ccd877f8d-72gcn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-apiserver"}, InterfaceName:"calid54d36e3e06", MAC:"02:ce:fd:12:eb:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:35:17.878043 containerd[1467]: 2026-04-24 23:35:17.869 [INFO][3481] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b" Namespace="calico-system" Pod="calico-apiserver-ccd877f8d-72gcn" WorkloadEndpoint="172--238--161--65-k8s-calico--apiserver--ccd877f8d--72gcn-eth0" Apr 24 23:35:17.917761 kubelet[2551]: I0424 23:35:17.916934 2551 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-jj48l" podStartSLOduration=2.161002401 podStartE2EDuration="12.916915803s" podCreationTimestamp="2026-04-24 23:35:05 +0000 UTC" firstStartedPulling="2026-04-24 23:35:06.121551114 +0000 UTC m=+18.459747636" lastFinishedPulling="2026-04-24 23:35:16.877464516 +0000 UTC m=+29.215661038" observedRunningTime="2026-04-24 23:35:17.915177032 +0000 UTC m=+30.253373574" watchObservedRunningTime="2026-04-24 23:35:17.916915803 +0000 UTC m=+30.255112325" Apr 24 23:35:17.941811 systemd[1]: Started cri-containerd-6795d7008d5d32c2cfa39674fc9289e4f50a65343a04f2d33f230a0f7e936eea.scope - libcontainer container 6795d7008d5d32c2cfa39674fc9289e4f50a65343a04f2d33f230a0f7e936eea. Apr 24 23:35:17.966768 systemd-networkd[1388]: cali02b85110d06: Link UP Apr 24 23:35:17.966994 systemd-networkd[1388]: cali02b85110d06: Gained carrier Apr 24 23:35:17.974382 containerd[1467]: time="2026-04-24T23:35:17.974132673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:35:17.974382 containerd[1467]: time="2026-04-24T23:35:17.974183922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:35:17.974382 containerd[1467]: time="2026-04-24T23:35:17.974312318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:17.976075 containerd[1467]: time="2026-04-24T23:35:17.974798745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:17.992006 containerd[1467]: 2026-04-24 23:35:17.343 [ERROR][3485] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:35:17.992006 containerd[1467]: 2026-04-24 23:35:17.377 [INFO][3485] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--161--65-k8s-calico--apiserver--ccd877f8d--t49xs-eth0 calico-apiserver-ccd877f8d- calico-system 8dcb464f-862f-44ec-a163-718ce414c666 887 0 2026-04-24 23:35:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ccd877f8d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-238-161-65 calico-apiserver-ccd877f8d-t49xs eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali02b85110d06 [] [] }} ContainerID="682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f" Namespace="calico-system" Pod="calico-apiserver-ccd877f8d-t49xs" WorkloadEndpoint="172--238--161--65-k8s-calico--apiserver--ccd877f8d--t49xs-" Apr 24 23:35:17.992006 containerd[1467]: 2026-04-24 23:35:17.377 [INFO][3485] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f" Namespace="calico-system" 
Pod="calico-apiserver-ccd877f8d-t49xs" WorkloadEndpoint="172--238--161--65-k8s-calico--apiserver--ccd877f8d--t49xs-eth0" Apr 24 23:35:17.992006 containerd[1467]: 2026-04-24 23:35:17.550 [INFO][3568] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f" HandleID="k8s-pod-network.682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f" Workload="172--238--161--65-k8s-calico--apiserver--ccd877f8d--t49xs-eth0" Apr 24 23:35:17.992006 containerd[1467]: 2026-04-24 23:35:17.576 [INFO][3568] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f" HandleID="k8s-pod-network.682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f" Workload="172--238--161--65-k8s-calico--apiserver--ccd877f8d--t49xs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f8120), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-161-65", "pod":"calico-apiserver-ccd877f8d-t49xs", "timestamp":"2026-04-24 23:35:17.550491877 +0000 UTC"}, Hostname:"172-238-161-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000464580)} Apr 24 23:35:17.992006 containerd[1467]: 2026-04-24 23:35:17.576 [INFO][3568] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:35:17.992006 containerd[1467]: 2026-04-24 23:35:17.827 [INFO][3568] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:35:17.992006 containerd[1467]: 2026-04-24 23:35:17.827 [INFO][3568] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-161-65' Apr 24 23:35:17.992006 containerd[1467]: 2026-04-24 23:35:17.893 [INFO][3568] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f" host="172-238-161-65" Apr 24 23:35:17.992006 containerd[1467]: 2026-04-24 23:35:17.902 [INFO][3568] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-161-65" Apr 24 23:35:17.992006 containerd[1467]: 2026-04-24 23:35:17.912 [INFO][3568] ipam/ipam.go 526: Trying affinity for 192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:17.992006 containerd[1467]: 2026-04-24 23:35:17.925 [INFO][3568] ipam/ipam.go 160: Attempting to load block cidr=192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:17.992006 containerd[1467]: 2026-04-24 23:35:17.931 [INFO][3568] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:17.992006 containerd[1467]: 2026-04-24 23:35:17.931 [INFO][3568] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f" host="172-238-161-65" Apr 24 23:35:17.992006 containerd[1467]: 2026-04-24 23:35:17.934 [INFO][3568] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f Apr 24 23:35:17.992006 containerd[1467]: 2026-04-24 23:35:17.939 [INFO][3568] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f" host="172-238-161-65" Apr 24 23:35:17.992006 containerd[1467]: 2026-04-24 23:35:17.950 [INFO][3568] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.40.4/26] block=192.168.40.0/26 
handle="k8s-pod-network.682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f" host="172-238-161-65" Apr 24 23:35:17.992006 containerd[1467]: 2026-04-24 23:35:17.950 [INFO][3568] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.40.4/26] handle="k8s-pod-network.682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f" host="172-238-161-65" Apr 24 23:35:17.992006 containerd[1467]: 2026-04-24 23:35:17.950 [INFO][3568] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:35:17.992006 containerd[1467]: 2026-04-24 23:35:17.950 [INFO][3568] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.40.4/26] IPv6=[] ContainerID="682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f" HandleID="k8s-pod-network.682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f" Workload="172--238--161--65-k8s-calico--apiserver--ccd877f8d--t49xs-eth0" Apr 24 23:35:17.993077 containerd[1467]: 2026-04-24 23:35:17.959 [INFO][3485] cni-plugin/k8s.go 418: Populated endpoint ContainerID="682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f" Namespace="calico-system" Pod="calico-apiserver-ccd877f8d-t49xs" WorkloadEndpoint="172--238--161--65-k8s-calico--apiserver--ccd877f8d--t49xs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--65-k8s-calico--apiserver--ccd877f8d--t49xs-eth0", GenerateName:"calico-apiserver-ccd877f8d-", Namespace:"calico-system", SelfLink:"", UID:"8dcb464f-862f-44ec-a163-718ce414c666", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccd877f8d", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-65", ContainerID:"", Pod:"calico-apiserver-ccd877f8d-t49xs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali02b85110d06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:35:17.993077 containerd[1467]: 2026-04-24 23:35:17.959 [INFO][3485] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.4/32] ContainerID="682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f" Namespace="calico-system" Pod="calico-apiserver-ccd877f8d-t49xs" WorkloadEndpoint="172--238--161--65-k8s-calico--apiserver--ccd877f8d--t49xs-eth0" Apr 24 23:35:17.993077 containerd[1467]: 2026-04-24 23:35:17.959 [INFO][3485] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02b85110d06 ContainerID="682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f" Namespace="calico-system" Pod="calico-apiserver-ccd877f8d-t49xs" WorkloadEndpoint="172--238--161--65-k8s-calico--apiserver--ccd877f8d--t49xs-eth0" Apr 24 23:35:17.993077 containerd[1467]: 2026-04-24 23:35:17.965 [INFO][3485] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f" Namespace="calico-system" Pod="calico-apiserver-ccd877f8d-t49xs" WorkloadEndpoint="172--238--161--65-k8s-calico--apiserver--ccd877f8d--t49xs-eth0" Apr 24 23:35:17.993077 containerd[1467]: 2026-04-24 23:35:17.966 [INFO][3485] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f" Namespace="calico-system" Pod="calico-apiserver-ccd877f8d-t49xs" WorkloadEndpoint="172--238--161--65-k8s-calico--apiserver--ccd877f8d--t49xs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--65-k8s-calico--apiserver--ccd877f8d--t49xs-eth0", GenerateName:"calico-apiserver-ccd877f8d-", Namespace:"calico-system", SelfLink:"", UID:"8dcb464f-862f-44ec-a163-718ce414c666", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccd877f8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-65", ContainerID:"682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f", Pod:"calico-apiserver-ccd877f8d-t49xs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali02b85110d06", MAC:"1e:43:c2:51:6b:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:35:17.993077 containerd[1467]: 2026-04-24 23:35:17.985 [INFO][3485] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f" Namespace="calico-system" Pod="calico-apiserver-ccd877f8d-t49xs" WorkloadEndpoint="172--238--161--65-k8s-calico--apiserver--ccd877f8d--t49xs-eth0" Apr 24 23:35:18.033561 containerd[1467]: time="2026-04-24T23:35:18.033304542Z" level=info msg="StartContainer for \"6795d7008d5d32c2cfa39674fc9289e4f50a65343a04f2d33f230a0f7e936eea\" returns successfully" Apr 24 23:35:18.068383 containerd[1467]: time="2026-04-24T23:35:18.066770266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-tqq6w,Uid:a78654c3-b12f-4de9-ae62-9fb13eb78d56,Namespace:calico-system,Attempt:0,} returns sandbox id \"96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066\"" Apr 24 23:35:18.070023 containerd[1467]: time="2026-04-24T23:35:18.069995331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 24 23:35:18.104820 containerd[1467]: time="2026-04-24T23:35:18.103248391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:35:18.104820 containerd[1467]: time="2026-04-24T23:35:18.103295600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:35:18.104820 containerd[1467]: time="2026-04-24T23:35:18.103325019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:18.104820 containerd[1467]: time="2026-04-24T23:35:18.103399997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:18.115811 systemd[1]: Started cri-containerd-92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b.scope - libcontainer container 92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b. 
Apr 24 23:35:18.137820 systemd-networkd[1388]: cali9d0743194f8: Link UP Apr 24 23:35:18.143890 systemd-networkd[1388]: cali9d0743194f8: Gained carrier Apr 24 23:35:18.177814 systemd[1]: Started cri-containerd-682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f.scope - libcontainer container 682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f. Apr 24 23:35:18.186910 containerd[1467]: 2026-04-24 23:35:17.374 [ERROR][3498] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:35:18.186910 containerd[1467]: 2026-04-24 23:35:17.420 [INFO][3498] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--161--65-k8s-coredns--7d764666f9--8hxr9-eth0 coredns-7d764666f9- kube-system 714bc81a-57d7-4165-96f9-6fce5ffe62a9 886 0 2026-04-24 23:34:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-238-161-65 coredns-7d764666f9-8hxr9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9d0743194f8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc" Namespace="kube-system" Pod="coredns-7d764666f9-8hxr9" WorkloadEndpoint="172--238--161--65-k8s-coredns--7d764666f9--8hxr9-" Apr 24 23:35:18.186910 containerd[1467]: 2026-04-24 23:35:17.420 [INFO][3498] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc" Namespace="kube-system" Pod="coredns-7d764666f9-8hxr9" WorkloadEndpoint="172--238--161--65-k8s-coredns--7d764666f9--8hxr9-eth0" Apr 24 23:35:18.186910 containerd[1467]: 
2026-04-24 23:35:17.582 [INFO][3578] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc" HandleID="k8s-pod-network.592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc" Workload="172--238--161--65-k8s-coredns--7d764666f9--8hxr9-eth0" Apr 24 23:35:18.186910 containerd[1467]: 2026-04-24 23:35:17.595 [INFO][3578] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc" HandleID="k8s-pod-network.592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc" Workload="172--238--161--65-k8s-coredns--7d764666f9--8hxr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fbf00), Attrs:map[string]string{"namespace":"kube-system", "node":"172-238-161-65", "pod":"coredns-7d764666f9-8hxr9", "timestamp":"2026-04-24 23:35:17.582132917 +0000 UTC"}, Hostname:"172-238-161-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000189600)} Apr 24 23:35:18.186910 containerd[1467]: 2026-04-24 23:35:17.596 [INFO][3578] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:35:18.186910 containerd[1467]: 2026-04-24 23:35:17.950 [INFO][3578] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:35:18.186910 containerd[1467]: 2026-04-24 23:35:17.950 [INFO][3578] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-161-65' Apr 24 23:35:18.186910 containerd[1467]: 2026-04-24 23:35:17.993 [INFO][3578] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc" host="172-238-161-65" Apr 24 23:35:18.186910 containerd[1467]: 2026-04-24 23:35:18.005 [INFO][3578] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-161-65" Apr 24 23:35:18.186910 containerd[1467]: 2026-04-24 23:35:18.028 [INFO][3578] ipam/ipam.go 526: Trying affinity for 192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:18.186910 containerd[1467]: 2026-04-24 23:35:18.035 [INFO][3578] ipam/ipam.go 160: Attempting to load block cidr=192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:18.186910 containerd[1467]: 2026-04-24 23:35:18.046 [INFO][3578] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:18.186910 containerd[1467]: 2026-04-24 23:35:18.046 [INFO][3578] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc" host="172-238-161-65" Apr 24 23:35:18.186910 containerd[1467]: 2026-04-24 23:35:18.051 [INFO][3578] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc Apr 24 23:35:18.186910 containerd[1467]: 2026-04-24 23:35:18.085 [INFO][3578] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc" host="172-238-161-65" Apr 24 23:35:18.186910 containerd[1467]: 2026-04-24 23:35:18.112 [INFO][3578] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.40.5/26] block=192.168.40.0/26 
handle="k8s-pod-network.592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc" host="172-238-161-65" Apr 24 23:35:18.186910 containerd[1467]: 2026-04-24 23:35:18.112 [INFO][3578] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.40.5/26] handle="k8s-pod-network.592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc" host="172-238-161-65" Apr 24 23:35:18.186910 containerd[1467]: 2026-04-24 23:35:18.112 [INFO][3578] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:35:18.186910 containerd[1467]: 2026-04-24 23:35:18.112 [INFO][3578] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.40.5/26] IPv6=[] ContainerID="592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc" HandleID="k8s-pod-network.592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc" Workload="172--238--161--65-k8s-coredns--7d764666f9--8hxr9-eth0" Apr 24 23:35:18.187578 containerd[1467]: 2026-04-24 23:35:18.128 [INFO][3498] cni-plugin/k8s.go 418: Populated endpoint ContainerID="592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc" Namespace="kube-system" Pod="coredns-7d764666f9-8hxr9" WorkloadEndpoint="172--238--161--65-k8s-coredns--7d764666f9--8hxr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--65-k8s-coredns--7d764666f9--8hxr9-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"714bc81a-57d7-4165-96f9-6fce5ffe62a9", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 34, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-65", ContainerID:"", Pod:"coredns-7d764666f9-8hxr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d0743194f8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:35:18.187578 containerd[1467]: 2026-04-24 23:35:18.129 [INFO][3498] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.5/32] ContainerID="592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc" Namespace="kube-system" Pod="coredns-7d764666f9-8hxr9" WorkloadEndpoint="172--238--161--65-k8s-coredns--7d764666f9--8hxr9-eth0" Apr 24 23:35:18.187578 containerd[1467]: 2026-04-24 23:35:18.129 [INFO][3498] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d0743194f8 ContainerID="592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc" Namespace="kube-system" Pod="coredns-7d764666f9-8hxr9" 
WorkloadEndpoint="172--238--161--65-k8s-coredns--7d764666f9--8hxr9-eth0" Apr 24 23:35:18.187578 containerd[1467]: 2026-04-24 23:35:18.143 [INFO][3498] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc" Namespace="kube-system" Pod="coredns-7d764666f9-8hxr9" WorkloadEndpoint="172--238--161--65-k8s-coredns--7d764666f9--8hxr9-eth0" Apr 24 23:35:18.187578 containerd[1467]: 2026-04-24 23:35:18.146 [INFO][3498] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc" Namespace="kube-system" Pod="coredns-7d764666f9-8hxr9" WorkloadEndpoint="172--238--161--65-k8s-coredns--7d764666f9--8hxr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--65-k8s-coredns--7d764666f9--8hxr9-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"714bc81a-57d7-4165-96f9-6fce5ffe62a9", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 34, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-65", ContainerID:"592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc", Pod:"coredns-7d764666f9-8hxr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d0743194f8", MAC:"1e:0c:ef:fa:f7:fe", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:35:18.187578 containerd[1467]: 2026-04-24 23:35:18.180 [INFO][3498] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc" Namespace="kube-system" Pod="coredns-7d764666f9-8hxr9" WorkloadEndpoint="172--238--161--65-k8s-coredns--7d764666f9--8hxr9-eth0" Apr 24 23:35:18.205538 systemd-networkd[1388]: calieb0d5688881: Link UP Apr 24 23:35:18.207797 systemd-networkd[1388]: calieb0d5688881: Gained carrier Apr 24 23:35:18.230068 containerd[1467]: 2026-04-24 23:35:17.403 [ERROR][3509] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:35:18.230068 containerd[1467]: 2026-04-24 23:35:17.451 [INFO][3509] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--161--65-k8s-calico--kube--controllers--78fd48d56d--w6wzs-eth0 
calico-kube-controllers-78fd48d56d- calico-system f6312e20-cf69-45d0-8185-eca763f065e1 889 0 2026-04-24 23:35:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78fd48d56d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-238-161-65 calico-kube-controllers-78fd48d56d-w6wzs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calieb0d5688881 [] [] }} ContainerID="b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306" Namespace="calico-system" Pod="calico-kube-controllers-78fd48d56d-w6wzs" WorkloadEndpoint="172--238--161--65-k8s-calico--kube--controllers--78fd48d56d--w6wzs-" Apr 24 23:35:18.230068 containerd[1467]: 2026-04-24 23:35:17.451 [INFO][3509] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306" Namespace="calico-system" Pod="calico-kube-controllers-78fd48d56d-w6wzs" WorkloadEndpoint="172--238--161--65-k8s-calico--kube--controllers--78fd48d56d--w6wzs-eth0" Apr 24 23:35:18.230068 containerd[1467]: 2026-04-24 23:35:17.610 [INFO][3586] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306" HandleID="k8s-pod-network.b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306" Workload="172--238--161--65-k8s-calico--kube--controllers--78fd48d56d--w6wzs-eth0" Apr 24 23:35:18.230068 containerd[1467]: 2026-04-24 23:35:17.626 [INFO][3586] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306" HandleID="k8s-pod-network.b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306" Workload="172--238--161--65-k8s-calico--kube--controllers--78fd48d56d--w6wzs-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b9900), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-161-65", "pod":"calico-kube-controllers-78fd48d56d-w6wzs", "timestamp":"2026-04-24 23:35:17.610864659 +0000 UTC"}, Hostname:"172-238-161-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005a49a0)} Apr 24 23:35:18.230068 containerd[1467]: 2026-04-24 23:35:17.626 [INFO][3586] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:35:18.230068 containerd[1467]: 2026-04-24 23:35:18.112 [INFO][3586] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:35:18.230068 containerd[1467]: 2026-04-24 23:35:18.112 [INFO][3586] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-161-65' Apr 24 23:35:18.230068 containerd[1467]: 2026-04-24 23:35:18.126 [INFO][3586] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306" host="172-238-161-65" Apr 24 23:35:18.230068 containerd[1467]: 2026-04-24 23:35:18.137 [INFO][3586] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-161-65" Apr 24 23:35:18.230068 containerd[1467]: 2026-04-24 23:35:18.166 [INFO][3586] ipam/ipam.go 526: Trying affinity for 192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:18.230068 containerd[1467]: 2026-04-24 23:35:18.172 [INFO][3586] ipam/ipam.go 160: Attempting to load block cidr=192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:18.230068 containerd[1467]: 2026-04-24 23:35:18.176 [INFO][3586] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:18.230068 containerd[1467]: 2026-04-24 23:35:18.177 [INFO][3586] ipam/ipam.go 1245: Attempting to assign 1 
addresses from block block=192.168.40.0/26 handle="k8s-pod-network.b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306" host="172-238-161-65" Apr 24 23:35:18.230068 containerd[1467]: 2026-04-24 23:35:18.180 [INFO][3586] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306 Apr 24 23:35:18.230068 containerd[1467]: 2026-04-24 23:35:18.185 [INFO][3586] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306" host="172-238-161-65" Apr 24 23:35:18.230068 containerd[1467]: 2026-04-24 23:35:18.190 [INFO][3586] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.40.6/26] block=192.168.40.0/26 handle="k8s-pod-network.b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306" host="172-238-161-65" Apr 24 23:35:18.230068 containerd[1467]: 2026-04-24 23:35:18.191 [INFO][3586] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.40.6/26] handle="k8s-pod-network.b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306" host="172-238-161-65" Apr 24 23:35:18.230068 containerd[1467]: 2026-04-24 23:35:18.191 [INFO][3586] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 24 23:35:18.230068 containerd[1467]: 2026-04-24 23:35:18.191 [INFO][3586] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.40.6/26] IPv6=[] ContainerID="b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306" HandleID="k8s-pod-network.b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306" Workload="172--238--161--65-k8s-calico--kube--controllers--78fd48d56d--w6wzs-eth0" Apr 24 23:35:18.231242 containerd[1467]: 2026-04-24 23:35:18.200 [INFO][3509] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306" Namespace="calico-system" Pod="calico-kube-controllers-78fd48d56d-w6wzs" WorkloadEndpoint="172--238--161--65-k8s-calico--kube--controllers--78fd48d56d--w6wzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--65-k8s-calico--kube--controllers--78fd48d56d--w6wzs-eth0", GenerateName:"calico-kube-controllers-78fd48d56d-", Namespace:"calico-system", SelfLink:"", UID:"f6312e20-cf69-45d0-8185-eca763f065e1", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78fd48d56d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-65", ContainerID:"", Pod:"calico-kube-controllers-78fd48d56d-w6wzs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.40.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calieb0d5688881", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:35:18.231242 containerd[1467]: 2026-04-24 23:35:18.200 [INFO][3509] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.6/32] ContainerID="b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306" Namespace="calico-system" Pod="calico-kube-controllers-78fd48d56d-w6wzs" WorkloadEndpoint="172--238--161--65-k8s-calico--kube--controllers--78fd48d56d--w6wzs-eth0" Apr 24 23:35:18.231242 containerd[1467]: 2026-04-24 23:35:18.200 [INFO][3509] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb0d5688881 ContainerID="b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306" Namespace="calico-system" Pod="calico-kube-controllers-78fd48d56d-w6wzs" WorkloadEndpoint="172--238--161--65-k8s-calico--kube--controllers--78fd48d56d--w6wzs-eth0" Apr 24 23:35:18.231242 containerd[1467]: 2026-04-24 23:35:18.210 [INFO][3509] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306" Namespace="calico-system" Pod="calico-kube-controllers-78fd48d56d-w6wzs" WorkloadEndpoint="172--238--161--65-k8s-calico--kube--controllers--78fd48d56d--w6wzs-eth0" Apr 24 23:35:18.231242 containerd[1467]: 2026-04-24 23:35:18.210 [INFO][3509] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306" Namespace="calico-system" Pod="calico-kube-controllers-78fd48d56d-w6wzs" WorkloadEndpoint="172--238--161--65-k8s-calico--kube--controllers--78fd48d56d--w6wzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--65-k8s-calico--kube--controllers--78fd48d56d--w6wzs-eth0", GenerateName:"calico-kube-controllers-78fd48d56d-", Namespace:"calico-system", SelfLink:"", UID:"f6312e20-cf69-45d0-8185-eca763f065e1", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78fd48d56d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-65", ContainerID:"b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306", Pod:"calico-kube-controllers-78fd48d56d-w6wzs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calieb0d5688881", MAC:"82:b9:88:f3:da:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:35:18.231242 containerd[1467]: 2026-04-24 23:35:18.225 [INFO][3509] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306" Namespace="calico-system" Pod="calico-kube-controllers-78fd48d56d-w6wzs" WorkloadEndpoint="172--238--161--65-k8s-calico--kube--controllers--78fd48d56d--w6wzs-eth0" Apr 24 23:35:18.279759 systemd-networkd[1388]: cali5252b73c6d4: Link UP Apr 24 
23:35:18.280000 systemd-networkd[1388]: cali5252b73c6d4: Gained carrier Apr 24 23:35:18.280662 containerd[1467]: time="2026-04-24T23:35:18.278607700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:35:18.280662 containerd[1467]: time="2026-04-24T23:35:18.280392933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:35:18.280662 containerd[1467]: time="2026-04-24T23:35:18.280412393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:18.280662 containerd[1467]: time="2026-04-24T23:35:18.280549619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:18.287114 containerd[1467]: time="2026-04-24T23:35:18.286843132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccd877f8d-72gcn,Uid:d831dedb-5364-4c11-9554-a57872730716,Namespace:calico-system,Attempt:0,} returns sandbox id \"92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b\"" Apr 24 23:35:18.296963 containerd[1467]: time="2026-04-24T23:35:18.296887087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:35:18.297155 containerd[1467]: time="2026-04-24T23:35:18.296937285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:35:18.297155 containerd[1467]: time="2026-04-24T23:35:18.297125260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:18.297437 containerd[1467]: time="2026-04-24T23:35:18.297403083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:18.302688 containerd[1467]: 2026-04-24 23:35:17.445 [ERROR][3541] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:35:18.302688 containerd[1467]: 2026-04-24 23:35:17.481 [INFO][3541] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0 whisker-74b6665b9d- calico-system c5f4d09e-c860-46dc-87d1-e132c91058f2 905 0 2026-04-24 23:35:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:74b6665b9d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-238-161-65 whisker-74b6665b9d-d5ddg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5252b73c6d4 [] [] }} ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Namespace="calico-system" Pod="whisker-74b6665b9d-d5ddg" WorkloadEndpoint="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-" Apr 24 23:35:18.302688 containerd[1467]: 2026-04-24 23:35:17.484 [INFO][3541] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Namespace="calico-system" Pod="whisker-74b6665b9d-d5ddg" WorkloadEndpoint="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" Apr 24 23:35:18.302688 containerd[1467]: 2026-04-24 23:35:17.619 [INFO][3595] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" 
HandleID="k8s-pod-network.933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Workload="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" Apr 24 23:35:18.302688 containerd[1467]: 2026-04-24 23:35:17.633 [INFO][3595] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" HandleID="k8s-pod-network.933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Workload="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fd90), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-161-65", "pod":"whisker-74b6665b9d-d5ddg", "timestamp":"2026-04-24 23:35:17.619255516 +0000 UTC"}, Hostname:"172-238-161-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001151e0)} Apr 24 23:35:18.302688 containerd[1467]: 2026-04-24 23:35:17.633 [INFO][3595] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:35:18.302688 containerd[1467]: 2026-04-24 23:35:18.191 [INFO][3595] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:35:18.302688 containerd[1467]: 2026-04-24 23:35:18.191 [INFO][3595] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-161-65' Apr 24 23:35:18.302688 containerd[1467]: 2026-04-24 23:35:18.220 [INFO][3595] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" host="172-238-161-65" Apr 24 23:35:18.302688 containerd[1467]: 2026-04-24 23:35:18.237 [INFO][3595] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-161-65" Apr 24 23:35:18.302688 containerd[1467]: 2026-04-24 23:35:18.245 [INFO][3595] ipam/ipam.go 526: Trying affinity for 192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:18.302688 containerd[1467]: 2026-04-24 23:35:18.248 [INFO][3595] ipam/ipam.go 160: Attempting to load block cidr=192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:18.302688 containerd[1467]: 2026-04-24 23:35:18.252 [INFO][3595] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:18.302688 containerd[1467]: 2026-04-24 23:35:18.252 [INFO][3595] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" host="172-238-161-65" Apr 24 23:35:18.302688 containerd[1467]: 2026-04-24 23:35:18.254 [INFO][3595] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068 Apr 24 23:35:18.302688 containerd[1467]: 2026-04-24 23:35:18.260 [INFO][3595] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" host="172-238-161-65" Apr 24 23:35:18.302688 containerd[1467]: 2026-04-24 23:35:18.269 [INFO][3595] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.40.7/26] block=192.168.40.0/26 
handle="k8s-pod-network.933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" host="172-238-161-65" Apr 24 23:35:18.302688 containerd[1467]: 2026-04-24 23:35:18.269 [INFO][3595] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.40.7/26] handle="k8s-pod-network.933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" host="172-238-161-65" Apr 24 23:35:18.302688 containerd[1467]: 2026-04-24 23:35:18.269 [INFO][3595] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:35:18.302688 containerd[1467]: 2026-04-24 23:35:18.269 [INFO][3595] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.40.7/26] IPv6=[] ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" HandleID="k8s-pod-network.933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Workload="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" Apr 24 23:35:18.303168 containerd[1467]: 2026-04-24 23:35:18.273 [INFO][3541] cni-plugin/k8s.go 418: Populated endpoint ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Namespace="calico-system" Pod="whisker-74b6665b9d-d5ddg" WorkloadEndpoint="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0", GenerateName:"whisker-74b6665b9d-", Namespace:"calico-system", SelfLink:"", UID:"c5f4d09e-c860-46dc-87d1-e132c91058f2", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 35, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74b6665b9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-65", ContainerID:"", Pod:"whisker-74b6665b9d-d5ddg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.40.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5252b73c6d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:35:18.303168 containerd[1467]: 2026-04-24 23:35:18.273 [INFO][3541] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.7/32] ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Namespace="calico-system" Pod="whisker-74b6665b9d-d5ddg" WorkloadEndpoint="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" Apr 24 23:35:18.303168 containerd[1467]: 2026-04-24 23:35:18.273 [INFO][3541] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5252b73c6d4 ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Namespace="calico-system" Pod="whisker-74b6665b9d-d5ddg" WorkloadEndpoint="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" Apr 24 23:35:18.303168 containerd[1467]: 2026-04-24 23:35:18.280 [INFO][3541] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Namespace="calico-system" Pod="whisker-74b6665b9d-d5ddg" WorkloadEndpoint="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" Apr 24 23:35:18.303168 containerd[1467]: 2026-04-24 23:35:18.282 [INFO][3541] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Namespace="calico-system" 
Pod="whisker-74b6665b9d-d5ddg" WorkloadEndpoint="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0", GenerateName:"whisker-74b6665b9d-", Namespace:"calico-system", SelfLink:"", UID:"c5f4d09e-c860-46dc-87d1-e132c91058f2", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 35, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74b6665b9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-65", ContainerID:"933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068", Pod:"whisker-74b6665b9d-d5ddg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.40.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5252b73c6d4", MAC:"ce:cb:b9:59:2d:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:35:18.303168 containerd[1467]: 2026-04-24 23:35:18.294 [INFO][3541] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Namespace="calico-system" Pod="whisker-74b6665b9d-d5ddg" WorkloadEndpoint="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" Apr 24 23:35:18.351975 systemd[1]: Started 
cri-containerd-592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc.scope - libcontainer container 592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc. Apr 24 23:35:18.357856 containerd[1467]: time="2026-04-24T23:35:18.357566781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccd877f8d-t49xs,Uid:8dcb464f-862f-44ec-a163-718ce414c666,Namespace:calico-system,Attempt:0,} returns sandbox id \"682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f\"" Apr 24 23:35:18.357663 systemd[1]: Started cri-containerd-b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306.scope - libcontainer container b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306. Apr 24 23:35:18.377756 containerd[1467]: time="2026-04-24T23:35:18.377087224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:35:18.377756 containerd[1467]: time="2026-04-24T23:35:18.377138613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:35:18.377756 containerd[1467]: time="2026-04-24T23:35:18.377152132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:18.377756 containerd[1467]: time="2026-04-24T23:35:18.377230700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:18.397249 systemd-networkd[1388]: califa694b8914c: Link UP Apr 24 23:35:18.399152 systemd-networkd[1388]: califa694b8914c: Gained carrier Apr 24 23:35:18.423872 systemd[1]: Started cri-containerd-933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068.scope - libcontainer container 933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068. 
Apr 24 23:35:18.431241 containerd[1467]: 2026-04-24 23:35:17.932 [ERROR][3725] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:35:18.431241 containerd[1467]: 2026-04-24 23:35:17.963 [INFO][3725] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--161--65-k8s-csi--node--driver--hqhqx-eth0 csi-node-driver- calico-system 53bf7884-6c7b-4ee5-be4f-549901e455a2 755 0 2026-04-24 23:35:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-238-161-65 csi-node-driver-hqhqx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califa694b8914c [] [] }} ContainerID="c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd" Namespace="calico-system" Pod="csi-node-driver-hqhqx" WorkloadEndpoint="172--238--161--65-k8s-csi--node--driver--hqhqx-" Apr 24 23:35:18.431241 containerd[1467]: 2026-04-24 23:35:17.963 [INFO][3725] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd" Namespace="calico-system" Pod="csi-node-driver-hqhqx" WorkloadEndpoint="172--238--161--65-k8s-csi--node--driver--hqhqx-eth0" Apr 24 23:35:18.431241 containerd[1467]: 2026-04-24 23:35:18.031 [INFO][3783] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd" HandleID="k8s-pod-network.c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd" Workload="172--238--161--65-k8s-csi--node--driver--hqhqx-eth0" Apr 24 23:35:18.431241 
containerd[1467]: 2026-04-24 23:35:18.048 [INFO][3783] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd" HandleID="k8s-pod-network.c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd" Workload="172--238--161--65-k8s-csi--node--driver--hqhqx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002614c0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-161-65", "pod":"csi-node-driver-hqhqx", "timestamp":"2026-04-24 23:35:18.031366773 +0000 UTC"}, Hostname:"172-238-161-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00023ef20)} Apr 24 23:35:18.431241 containerd[1467]: 2026-04-24 23:35:18.048 [INFO][3783] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:35:18.431241 containerd[1467]: 2026-04-24 23:35:18.270 [INFO][3783] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:35:18.431241 containerd[1467]: 2026-04-24 23:35:18.270 [INFO][3783] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-161-65' Apr 24 23:35:18.431241 containerd[1467]: 2026-04-24 23:35:18.321 [INFO][3783] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd" host="172-238-161-65" Apr 24 23:35:18.431241 containerd[1467]: 2026-04-24 23:35:18.336 [INFO][3783] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-161-65" Apr 24 23:35:18.431241 containerd[1467]: 2026-04-24 23:35:18.346 [INFO][3783] ipam/ipam.go 526: Trying affinity for 192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:18.431241 containerd[1467]: 2026-04-24 23:35:18.351 [INFO][3783] ipam/ipam.go 160: Attempting to load block cidr=192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:18.431241 containerd[1467]: 2026-04-24 23:35:18.354 [INFO][3783] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:18.431241 containerd[1467]: 2026-04-24 23:35:18.354 [INFO][3783] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd" host="172-238-161-65" Apr 24 23:35:18.431241 containerd[1467]: 2026-04-24 23:35:18.359 [INFO][3783] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd Apr 24 23:35:18.431241 containerd[1467]: 2026-04-24 23:35:18.374 [INFO][3783] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd" host="172-238-161-65" Apr 24 23:35:18.431241 containerd[1467]: 2026-04-24 23:35:18.384 [INFO][3783] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.40.8/26] block=192.168.40.0/26 
handle="k8s-pod-network.c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd" host="172-238-161-65" Apr 24 23:35:18.431241 containerd[1467]: 2026-04-24 23:35:18.384 [INFO][3783] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.40.8/26] handle="k8s-pod-network.c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd" host="172-238-161-65" Apr 24 23:35:18.431241 containerd[1467]: 2026-04-24 23:35:18.384 [INFO][3783] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:35:18.431241 containerd[1467]: 2026-04-24 23:35:18.384 [INFO][3783] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.40.8/26] IPv6=[] ContainerID="c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd" HandleID="k8s-pod-network.c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd" Workload="172--238--161--65-k8s-csi--node--driver--hqhqx-eth0" Apr 24 23:35:18.431841 containerd[1467]: 2026-04-24 23:35:18.392 [INFO][3725] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd" Namespace="calico-system" Pod="csi-node-driver-hqhqx" WorkloadEndpoint="172--238--161--65-k8s-csi--node--driver--hqhqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--65-k8s-csi--node--driver--hqhqx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"53bf7884-6c7b-4ee5-be4f-549901e455a2", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-65", ContainerID:"", Pod:"csi-node-driver-hqhqx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califa694b8914c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:35:18.431841 containerd[1467]: 2026-04-24 23:35:18.392 [INFO][3725] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.8/32] ContainerID="c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd" Namespace="calico-system" Pod="csi-node-driver-hqhqx" WorkloadEndpoint="172--238--161--65-k8s-csi--node--driver--hqhqx-eth0" Apr 24 23:35:18.431841 containerd[1467]: 2026-04-24 23:35:18.393 [INFO][3725] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa694b8914c ContainerID="c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd" Namespace="calico-system" Pod="csi-node-driver-hqhqx" WorkloadEndpoint="172--238--161--65-k8s-csi--node--driver--hqhqx-eth0" Apr 24 23:35:18.431841 containerd[1467]: 2026-04-24 23:35:18.401 [INFO][3725] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd" Namespace="calico-system" Pod="csi-node-driver-hqhqx" WorkloadEndpoint="172--238--161--65-k8s-csi--node--driver--hqhqx-eth0" Apr 24 23:35:18.431841 containerd[1467]: 2026-04-24 23:35:18.401 [INFO][3725] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd" Namespace="calico-system" Pod="csi-node-driver-hqhqx" WorkloadEndpoint="172--238--161--65-k8s-csi--node--driver--hqhqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--65-k8s-csi--node--driver--hqhqx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"53bf7884-6c7b-4ee5-be4f-549901e455a2", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-65", ContainerID:"c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd", Pod:"csi-node-driver-hqhqx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califa694b8914c", MAC:"7a:c5:21:08:9a:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:35:18.431841 containerd[1467]: 2026-04-24 23:35:18.427 [INFO][3725] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd" 
Namespace="calico-system" Pod="csi-node-driver-hqhqx" WorkloadEndpoint="172--238--161--65-k8s-csi--node--driver--hqhqx-eth0" Apr 24 23:35:18.451977 containerd[1467]: time="2026-04-24T23:35:18.451165534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8hxr9,Uid:714bc81a-57d7-4165-96f9-6fce5ffe62a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc\"" Apr 24 23:35:18.452756 kubelet[2551]: E0424 23:35:18.452612 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:35:18.464058 containerd[1467]: time="2026-04-24T23:35:18.462509144Z" level=info msg="CreateContainer within sandbox \"592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:35:18.491469 containerd[1467]: time="2026-04-24T23:35:18.491440498Z" level=info msg="CreateContainer within sandbox \"592858cfa825a8cb73c2b63b02d2e2e44fd43622bdbdc81ff9a278317512dacc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"895932576a1e8e8541fe85acb8c80996eb55e7da1d56106d97c325cf6b91eea9\"" Apr 24 23:35:18.492417 containerd[1467]: time="2026-04-24T23:35:18.492315155Z" level=info msg="StartContainer for \"895932576a1e8e8541fe85acb8c80996eb55e7da1d56106d97c325cf6b91eea9\"" Apr 24 23:35:18.508727 containerd[1467]: time="2026-04-24T23:35:18.508398289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:35:18.508727 containerd[1467]: time="2026-04-24T23:35:18.508468127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:35:18.508727 containerd[1467]: time="2026-04-24T23:35:18.508482147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:18.508727 containerd[1467]: time="2026-04-24T23:35:18.508559475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:18.509282 containerd[1467]: time="2026-04-24T23:35:18.509246487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78fd48d56d-w6wzs,Uid:f6312e20-cf69-45d0-8185-eca763f065e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306\"" Apr 24 23:35:18.514193 containerd[1467]: time="2026-04-24T23:35:18.513911523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74b6665b9d-d5ddg,Uid:c5f4d09e-c860-46dc-87d1-e132c91058f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068\"" Apr 24 23:35:18.539116 systemd[1]: Started cri-containerd-c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd.scope - libcontainer container c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd. Apr 24 23:35:18.546802 systemd[1]: Started cri-containerd-895932576a1e8e8541fe85acb8c80996eb55e7da1d56106d97c325cf6b91eea9.scope - libcontainer container 895932576a1e8e8541fe85acb8c80996eb55e7da1d56106d97c325cf6b91eea9. 
Apr 24 23:35:18.580168 containerd[1467]: time="2026-04-24T23:35:18.579964165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hqhqx,Uid:53bf7884-6c7b-4ee5-be4f-549901e455a2,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd\"" Apr 24 23:35:18.585127 containerd[1467]: time="2026-04-24T23:35:18.585094879Z" level=info msg="StartContainer for \"895932576a1e8e8541fe85acb8c80996eb55e7da1d56106d97c325cf6b91eea9\" returns successfully" Apr 24 23:35:18.911524 kubelet[2551]: E0424 23:35:18.911490 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:35:18.920646 kubelet[2551]: E0424 23:35:18.920618 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:35:18.926247 kubelet[2551]: I0424 23:35:18.926210 2551 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-8hxr9" podStartSLOduration=24.926201502 podStartE2EDuration="24.926201502s" podCreationTimestamp="2026-04-24 23:34:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:35:18.922761873 +0000 UTC m=+31.260958415" watchObservedRunningTime="2026-04-24 23:35:18.926201502 +0000 UTC m=+31.264398024" Apr 24 23:35:18.936465 kubelet[2551]: I0424 23:35:18.935126 2551 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:35:18.993712 kubelet[2551]: I0424 23:35:18.992497 2551 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-lg98j" podStartSLOduration=24.992486478 podStartE2EDuration="24.992486478s" 
podCreationTimestamp="2026-04-24 23:34:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:35:18.942266597 +0000 UTC m=+31.280463119" watchObservedRunningTime="2026-04-24 23:35:18.992486478 +0000 UTC m=+31.330683000" Apr 24 23:35:19.005087 systemd-networkd[1388]: cali9f73432d27f: Gained IPv6LL Apr 24 23:35:19.299018 systemd[1]: run-containerd-runc-k8s.io-9ac0f281975c24155283aaa4381177c841e90158b1d90cea59fe17482b63cb69-runc.6gSSBI.mount: Deactivated successfully. Apr 24 23:35:19.452855 systemd-networkd[1388]: cali92712000602: Gained IPv6LL Apr 24 23:35:19.515850 systemd-networkd[1388]: cali5252b73c6d4: Gained IPv6LL Apr 24 23:35:19.579869 systemd-networkd[1388]: calieb0d5688881: Gained IPv6LL Apr 24 23:35:19.708445 systemd-networkd[1388]: cali02b85110d06: Gained IPv6LL Apr 24 23:35:19.771818 systemd-networkd[1388]: calid54d36e3e06: Gained IPv6LL Apr 24 23:35:19.899905 systemd-networkd[1388]: cali9d0743194f8: Gained IPv6LL Apr 24 23:35:19.936410 kubelet[2551]: E0424 23:35:19.936353 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:35:19.937914 kubelet[2551]: E0424 23:35:19.937898 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:35:20.095452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3511957970.mount: Deactivated successfully. 
Apr 24 23:35:20.283848 systemd-networkd[1388]: califa694b8914c: Gained IPv6LL Apr 24 23:35:20.498240 containerd[1467]: time="2026-04-24T23:35:20.498198575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:20.499062 containerd[1467]: time="2026-04-24T23:35:20.499008056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 24 23:35:20.499528 containerd[1467]: time="2026-04-24T23:35:20.499487724Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:20.501718 containerd[1467]: time="2026-04-24T23:35:20.501664692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:20.503025 containerd[1467]: time="2026-04-24T23:35:20.503002559Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.43297484s" Apr 24 23:35:20.503080 containerd[1467]: time="2026-04-24T23:35:20.503029009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 24 23:35:20.505534 containerd[1467]: time="2026-04-24T23:35:20.504100523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 24 23:35:20.507556 containerd[1467]: time="2026-04-24T23:35:20.507444223Z" level=info msg="CreateContainer 
within sandbox \"96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 24 23:35:20.523554 containerd[1467]: time="2026-04-24T23:35:20.523530546Z" level=info msg="CreateContainer within sandbox \"96d306f9c80dd25958c96b3e171861e6f13cd833b38aa027f8e60b137c7e6066\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a41de66d53574e24320a20e18cc6464c1a96a78852729adfd24ad95972862d08\"" Apr 24 23:35:20.524107 containerd[1467]: time="2026-04-24T23:35:20.523944636Z" level=info msg="StartContainer for \"a41de66d53574e24320a20e18cc6464c1a96a78852729adfd24ad95972862d08\"" Apr 24 23:35:20.557788 systemd[1]: Started cri-containerd-a41de66d53574e24320a20e18cc6464c1a96a78852729adfd24ad95972862d08.scope - libcontainer container a41de66d53574e24320a20e18cc6464c1a96a78852729adfd24ad95972862d08. Apr 24 23:35:20.601309 containerd[1467]: time="2026-04-24T23:35:20.601041541Z" level=info msg="StartContainer for \"a41de66d53574e24320a20e18cc6464c1a96a78852729adfd24ad95972862d08\" returns successfully" Apr 24 23:35:20.946249 kubelet[2551]: E0424 23:35:20.945828 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:35:20.947017 kubelet[2551]: E0424 23:35:20.946186 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:35:20.964025 kubelet[2551]: I0424 23:35:20.963952 2551 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-tqq6w" podStartSLOduration=13.529980699 podStartE2EDuration="15.963934673s" podCreationTimestamp="2026-04-24 23:35:05 +0000 UTC" firstStartedPulling="2026-04-24 23:35:18.069629511 +0000 UTC m=+30.407826043" lastFinishedPulling="2026-04-24 
23:35:20.503583485 +0000 UTC m=+32.841780017" observedRunningTime="2026-04-24 23:35:20.963332597 +0000 UTC m=+33.301529119" watchObservedRunningTime="2026-04-24 23:35:20.963934673 +0000 UTC m=+33.302131195" Apr 24 23:35:21.984164 systemd[1]: run-containerd-runc-k8s.io-a41de66d53574e24320a20e18cc6464c1a96a78852729adfd24ad95972862d08-runc.KLUAtI.mount: Deactivated successfully. Apr 24 23:35:22.350122 containerd[1467]: time="2026-04-24T23:35:22.349970378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:22.351210 containerd[1467]: time="2026-04-24T23:35:22.351153382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 24 23:35:22.352030 containerd[1467]: time="2026-04-24T23:35:22.351958665Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:22.354391 containerd[1467]: time="2026-04-24T23:35:22.354367922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:22.355626 containerd[1467]: time="2026-04-24T23:35:22.355485507Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.851360754s" Apr 24 23:35:22.355626 containerd[1467]: time="2026-04-24T23:35:22.355519556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 24 23:35:22.357637 containerd[1467]: time="2026-04-24T23:35:22.357148221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 24 23:35:22.361324 containerd[1467]: time="2026-04-24T23:35:22.361297530Z" level=info msg="CreateContainer within sandbox \"92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:35:22.378462 containerd[1467]: time="2026-04-24T23:35:22.378421574Z" level=info msg="CreateContainer within sandbox \"92e7d3d645c79e667e62dc5815486898c8c96fa57cfa4232a1824a4958f7fa5b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e15364b7114821793edaa8baf4b6ff70fa0e922dbb8b7c4959105fe461eaba33\"" Apr 24 23:35:22.380450 containerd[1467]: time="2026-04-24T23:35:22.380309043Z" level=info msg="StartContainer for \"e15364b7114821793edaa8baf4b6ff70fa0e922dbb8b7c4959105fe461eaba33\"" Apr 24 23:35:22.455842 systemd[1]: Started cri-containerd-e15364b7114821793edaa8baf4b6ff70fa0e922dbb8b7c4959105fe461eaba33.scope - libcontainer container e15364b7114821793edaa8baf4b6ff70fa0e922dbb8b7c4959105fe461eaba33. 
Apr 24 23:35:22.543590 containerd[1467]: time="2026-04-24T23:35:22.543498423Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:22.547944 containerd[1467]: time="2026-04-24T23:35:22.547628483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 24 23:35:22.551392 containerd[1467]: time="2026-04-24T23:35:22.551349201Z" level=info msg="StartContainer for \"e15364b7114821793edaa8baf4b6ff70fa0e922dbb8b7c4959105fe461eaba33\" returns successfully" Apr 24 23:35:22.557143 containerd[1467]: time="2026-04-24T23:35:22.556971178Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 199.789188ms" Apr 24 23:35:22.557143 containerd[1467]: time="2026-04-24T23:35:22.557134484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 24 23:35:22.560560 containerd[1467]: time="2026-04-24T23:35:22.559185849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 24 23:35:22.564823 containerd[1467]: time="2026-04-24T23:35:22.564779656Z" level=info msg="CreateContainer within sandbox \"682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:35:22.588766 containerd[1467]: time="2026-04-24T23:35:22.588708402Z" level=info msg="CreateContainer within sandbox \"682f42fcd3b81a7627cc491e576b9a05c6d073ae741f608e61405c242837177f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"c8fb845c48d16237096db4f20eac24ba70d7a9904abc2b57e830850da1b9b32d\"" Apr 24 23:35:22.590152 containerd[1467]: time="2026-04-24T23:35:22.590065352Z" level=info msg="StartContainer for \"c8fb845c48d16237096db4f20eac24ba70d7a9904abc2b57e830850da1b9b32d\"" Apr 24 23:35:22.654320 systemd[1]: run-containerd-runc-k8s.io-c8fb845c48d16237096db4f20eac24ba70d7a9904abc2b57e830850da1b9b32d-runc.JAeQYu.mount: Deactivated successfully. Apr 24 23:35:22.664076 systemd[1]: Started cri-containerd-c8fb845c48d16237096db4f20eac24ba70d7a9904abc2b57e830850da1b9b32d.scope - libcontainer container c8fb845c48d16237096db4f20eac24ba70d7a9904abc2b57e830850da1b9b32d. Apr 24 23:35:22.728760 containerd[1467]: time="2026-04-24T23:35:22.728546724Z" level=info msg="StartContainer for \"c8fb845c48d16237096db4f20eac24ba70d7a9904abc2b57e830850da1b9b32d\" returns successfully" Apr 24 23:35:23.002317 kubelet[2551]: I0424 23:35:23.002128 2551 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-ccd877f8d-72gcn" podStartSLOduration=13.939175274 podStartE2EDuration="18.002113036s" podCreationTimestamp="2026-04-24 23:35:05 +0000 UTC" firstStartedPulling="2026-04-24 23:35:18.293931685 +0000 UTC m=+30.632128207" lastFinishedPulling="2026-04-24 23:35:22.356869447 +0000 UTC m=+34.695065969" observedRunningTime="2026-04-24 23:35:22.984741995 +0000 UTC m=+35.322938537" watchObservedRunningTime="2026-04-24 23:35:23.002113036 +0000 UTC m=+35.340309568" Apr 24 23:35:23.960214 kubelet[2551]: I0424 23:35:23.960153 2551 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:35:23.961454 kubelet[2551]: I0424 23:35:23.960757 2551 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:35:25.539230 containerd[1467]: time="2026-04-24T23:35:25.538909334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 24 23:35:25.540938 containerd[1467]: time="2026-04-24T23:35:25.540057272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 24 23:35:25.541010 containerd[1467]: time="2026-04-24T23:35:25.540956044Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:25.543238 containerd[1467]: time="2026-04-24T23:35:25.542939496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:25.544317 containerd[1467]: time="2026-04-24T23:35:25.544275821Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.985060522s" Apr 24 23:35:25.544382 containerd[1467]: time="2026-04-24T23:35:25.544326830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 24 23:35:25.546234 containerd[1467]: time="2026-04-24T23:35:25.546183094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 24 23:35:25.564876 containerd[1467]: time="2026-04-24T23:35:25.564776717Z" level=info msg="CreateContainer within sandbox \"b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 24 23:35:25.585227 containerd[1467]: 
time="2026-04-24T23:35:25.585140495Z" level=info msg="CreateContainer within sandbox \"b7958ffbac67856347efd44ea7d739ce3abf8fe027a3beb0f0f3b9d05d086306\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"53eb6e67eeedee9131b387ddb730167e27dee32dc51a21d53f9654d30e2daaf2\"" Apr 24 23:35:25.586864 containerd[1467]: time="2026-04-24T23:35:25.585865941Z" level=info msg="StartContainer for \"53eb6e67eeedee9131b387ddb730167e27dee32dc51a21d53f9654d30e2daaf2\"" Apr 24 23:35:25.653810 systemd[1]: Started cri-containerd-53eb6e67eeedee9131b387ddb730167e27dee32dc51a21d53f9654d30e2daaf2.scope - libcontainer container 53eb6e67eeedee9131b387ddb730167e27dee32dc51a21d53f9654d30e2daaf2. Apr 24 23:35:25.718073 containerd[1467]: time="2026-04-24T23:35:25.718029820Z" level=info msg="StartContainer for \"53eb6e67eeedee9131b387ddb730167e27dee32dc51a21d53f9654d30e2daaf2\" returns successfully" Apr 24 23:35:25.990123 kubelet[2551]: I0424 23:35:25.989750 2551 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-ccd877f8d-t49xs" podStartSLOduration=16.794327586 podStartE2EDuration="20.989738846s" podCreationTimestamp="2026-04-24 23:35:05 +0000 UTC" firstStartedPulling="2026-04-24 23:35:18.363025536 +0000 UTC m=+30.701222068" lastFinishedPulling="2026-04-24 23:35:22.558436806 +0000 UTC m=+34.896633328" observedRunningTime="2026-04-24 23:35:23.008189978 +0000 UTC m=+35.346386500" watchObservedRunningTime="2026-04-24 23:35:25.989738846 +0000 UTC m=+38.327935378" Apr 24 23:35:26.040073 kubelet[2551]: I0424 23:35:26.040002 2551 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78fd48d56d-w6wzs" podStartSLOduration=14.005805169 podStartE2EDuration="21.039990941s" podCreationTimestamp="2026-04-24 23:35:05 +0000 UTC" firstStartedPulling="2026-04-24 23:35:18.510866584 +0000 UTC m=+30.849063116" lastFinishedPulling="2026-04-24 23:35:25.545052366 +0000 UTC 
m=+37.883248888" observedRunningTime="2026-04-24 23:35:25.991382044 +0000 UTC m=+38.329578566" watchObservedRunningTime="2026-04-24 23:35:26.039990941 +0000 UTC m=+38.378187463" Apr 24 23:35:26.337329 containerd[1467]: time="2026-04-24T23:35:26.335883816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:26.337329 containerd[1467]: time="2026-04-24T23:35:26.336856548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 24 23:35:26.337503 containerd[1467]: time="2026-04-24T23:35:26.337388019Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:26.339209 containerd[1467]: time="2026-04-24T23:35:26.339182906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:26.339912 containerd[1467]: time="2026-04-24T23:35:26.339872933Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 793.63734ms" Apr 24 23:35:26.339912 containerd[1467]: time="2026-04-24T23:35:26.339909942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 24 23:35:26.342907 containerd[1467]: time="2026-04-24T23:35:26.342818209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 24 23:35:26.344785 
containerd[1467]: time="2026-04-24T23:35:26.344629445Z" level=info msg="CreateContainer within sandbox \"933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 24 23:35:26.353379 containerd[1467]: time="2026-04-24T23:35:26.353339395Z" level=info msg="CreateContainer within sandbox \"933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae\"" Apr 24 23:35:26.354087 containerd[1467]: time="2026-04-24T23:35:26.354033052Z" level=info msg="StartContainer for \"f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae\"" Apr 24 23:35:26.382995 systemd[1]: Started cri-containerd-f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae.scope - libcontainer container f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae. Apr 24 23:35:26.435139 containerd[1467]: time="2026-04-24T23:35:26.435069738Z" level=info msg="StartContainer for \"f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae\" returns successfully" Apr 24 23:35:27.217818 containerd[1467]: time="2026-04-24T23:35:27.217735901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:27.219481 containerd[1467]: time="2026-04-24T23:35:27.219427401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 24 23:35:27.220295 containerd[1467]: time="2026-04-24T23:35:27.220261086Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:27.222729 containerd[1467]: time="2026-04-24T23:35:27.222691963Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:27.223885 containerd[1467]: time="2026-04-24T23:35:27.223743345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 880.840708ms" Apr 24 23:35:27.223885 containerd[1467]: time="2026-04-24T23:35:27.223771844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 24 23:35:27.226440 containerd[1467]: time="2026-04-24T23:35:27.226409687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 24 23:35:27.231159 containerd[1467]: time="2026-04-24T23:35:27.231117974Z" level=info msg="CreateContainer within sandbox \"c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 24 23:35:27.244548 containerd[1467]: time="2026-04-24T23:35:27.244392439Z" level=info msg="CreateContainer within sandbox \"c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d2cd02b950bb1b7c795bfc7cf179cd6b4c1b7378690729601286d041210d5fa8\"" Apr 24 23:35:27.247221 containerd[1467]: time="2026-04-24T23:35:27.247152961Z" level=info msg="StartContainer for \"d2cd02b950bb1b7c795bfc7cf179cd6b4c1b7378690729601286d041210d5fa8\"" Apr 24 23:35:27.287898 systemd[1]: Started cri-containerd-d2cd02b950bb1b7c795bfc7cf179cd6b4c1b7378690729601286d041210d5fa8.scope - libcontainer container 
d2cd02b950bb1b7c795bfc7cf179cd6b4c1b7378690729601286d041210d5fa8. Apr 24 23:35:27.321795 containerd[1467]: time="2026-04-24T23:35:27.321716721Z" level=info msg="StartContainer for \"d2cd02b950bb1b7c795bfc7cf179cd6b4c1b7378690729601286d041210d5fa8\" returns successfully" Apr 24 23:35:28.300075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3836570086.mount: Deactivated successfully. Apr 24 23:35:28.313408 containerd[1467]: time="2026-04-24T23:35:28.313343916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:28.314282 containerd[1467]: time="2026-04-24T23:35:28.314246410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 24 23:35:28.316728 containerd[1467]: time="2026-04-24T23:35:28.315198654Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:28.319965 containerd[1467]: time="2026-04-24T23:35:28.319937114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:28.322804 containerd[1467]: time="2026-04-24T23:35:28.322748376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.096307389s" Apr 24 23:35:28.322916 containerd[1467]: time="2026-04-24T23:35:28.322890173Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 24 23:35:28.324921 containerd[1467]: time="2026-04-24T23:35:28.324898899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 24 23:35:28.327935 containerd[1467]: time="2026-04-24T23:35:28.327904998Z" level=info msg="CreateContainer within sandbox \"933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 24 23:35:28.348962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1359930352.mount: Deactivated successfully. Apr 24 23:35:28.351336 containerd[1467]: time="2026-04-24T23:35:28.351005466Z" level=info msg="CreateContainer within sandbox \"933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c\"" Apr 24 23:35:28.353104 containerd[1467]: time="2026-04-24T23:35:28.353068871Z" level=info msg="StartContainer for \"0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c\"" Apr 24 23:35:28.408826 systemd[1]: Started cri-containerd-0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c.scope - libcontainer container 0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c. 
Apr 24 23:35:28.465788 containerd[1467]: time="2026-04-24T23:35:28.465535939Z" level=info msg="StartContainer for \"0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c\" returns successfully" Apr 24 23:35:28.991649 containerd[1467]: time="2026-04-24T23:35:28.991604788Z" level=info msg="StopContainer for \"0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c\" with timeout 30 (s)" Apr 24 23:35:28.991832 containerd[1467]: time="2026-04-24T23:35:28.991774355Z" level=info msg="StopContainer for \"f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae\" with timeout 30 (s)" Apr 24 23:35:28.992130 containerd[1467]: time="2026-04-24T23:35:28.992097889Z" level=info msg="Stop container \"0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c\" with signal terminated" Apr 24 23:35:28.993890 containerd[1467]: time="2026-04-24T23:35:28.993745491Z" level=info msg="Stop container \"f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae\" with signal terminated" Apr 24 23:35:29.015658 kubelet[2551]: I0424 23:35:29.015370 2551 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-74b6665b9d-d5ddg" podStartSLOduration=12.206428063 podStartE2EDuration="22.015356604s" podCreationTimestamp="2026-04-24 23:35:07 +0000 UTC" firstStartedPulling="2026-04-24 23:35:18.514979815 +0000 UTC m=+30.853176347" lastFinishedPulling="2026-04-24 23:35:28.323908366 +0000 UTC m=+40.662104888" observedRunningTime="2026-04-24 23:35:29.01187407 +0000 UTC m=+41.350070592" watchObservedRunningTime="2026-04-24 23:35:29.015356604 +0000 UTC m=+41.353553136" Apr 24 23:35:29.023043 systemd[1]: cri-containerd-0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c.scope: Deactivated successfully. Apr 24 23:35:29.072521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c-rootfs.mount: Deactivated successfully. 
Apr 24 23:35:29.079429 systemd[1]: cri-containerd-f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae.scope: Deactivated successfully. Apr 24 23:35:29.085859 containerd[1467]: time="2026-04-24T23:35:29.085659535Z" level=info msg="shim disconnected" id=0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c namespace=k8s.io Apr 24 23:35:29.085859 containerd[1467]: time="2026-04-24T23:35:29.085729363Z" level=warning msg="cleaning up after shim disconnected" id=0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c namespace=k8s.io Apr 24 23:35:29.085859 containerd[1467]: time="2026-04-24T23:35:29.085738793Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:35:29.112614 containerd[1467]: time="2026-04-24T23:35:29.112542135Z" level=warning msg="cleanup warnings time=\"2026-04-24T23:35:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 24 23:35:29.123707 containerd[1467]: time="2026-04-24T23:35:29.122986895Z" level=info msg="shim disconnected" id=f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae namespace=k8s.io Apr 24 23:35:29.123707 containerd[1467]: time="2026-04-24T23:35:29.123042484Z" level=warning msg="cleaning up after shim disconnected" id=f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae namespace=k8s.io Apr 24 23:35:29.123707 containerd[1467]: time="2026-04-24T23:35:29.123051624Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:35:29.130593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae-rootfs.mount: Deactivated successfully. 
Apr 24 23:35:29.138335 containerd[1467]: time="2026-04-24T23:35:29.138298864Z" level=info msg="StopContainer for \"0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c\" returns successfully" Apr 24 23:35:29.150222 containerd[1467]: time="2026-04-24T23:35:29.150126371Z" level=warning msg="cleanup warnings time=\"2026-04-24T23:35:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 24 23:35:29.162062 containerd[1467]: time="2026-04-24T23:35:29.161934558Z" level=info msg="StopContainer for \"f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae\" returns successfully" Apr 24 23:35:29.163034 containerd[1467]: time="2026-04-24T23:35:29.162770014Z" level=info msg="StopPodSandbox for \"933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068\"" Apr 24 23:35:29.163034 containerd[1467]: time="2026-04-24T23:35:29.162875933Z" level=info msg="Container to stop \"f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 23:35:29.163034 containerd[1467]: time="2026-04-24T23:35:29.162888112Z" level=info msg="Container to stop \"0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 23:35:29.172449 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068-shm.mount: Deactivated successfully. Apr 24 23:35:29.195385 systemd[1]: cri-containerd-933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068.scope: Deactivated successfully. 
Apr 24 23:35:29.276789 containerd[1467]: time="2026-04-24T23:35:29.276584224Z" level=info msg="shim disconnected" id=933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068 namespace=k8s.io Apr 24 23:35:29.276789 containerd[1467]: time="2026-04-24T23:35:29.276641143Z" level=warning msg="cleaning up after shim disconnected" id=933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068 namespace=k8s.io Apr 24 23:35:29.276789 containerd[1467]: time="2026-04-24T23:35:29.276650423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:35:29.278634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068-rootfs.mount: Deactivated successfully. Apr 24 23:35:29.372976 systemd-networkd[1388]: cali5252b73c6d4: Link DOWN Apr 24 23:35:29.372991 systemd-networkd[1388]: cali5252b73c6d4: Lost carrier Apr 24 23:35:29.493923 containerd[1467]: 2026-04-24 23:35:29.367 [INFO][5008] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Apr 24 23:35:29.493923 containerd[1467]: 2026-04-24 23:35:29.368 [INFO][5008] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" iface="eth0" netns="/var/run/netns/cni-af3b4bf3-7b19-b27e-3b49-d3006cb066c0" Apr 24 23:35:29.493923 containerd[1467]: 2026-04-24 23:35:29.369 [INFO][5008] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" iface="eth0" netns="/var/run/netns/cni-af3b4bf3-7b19-b27e-3b49-d3006cb066c0" Apr 24 23:35:29.493923 containerd[1467]: 2026-04-24 23:35:29.380 [INFO][5008] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" after=11.871296ms iface="eth0" netns="/var/run/netns/cni-af3b4bf3-7b19-b27e-3b49-d3006cb066c0" Apr 24 23:35:29.493923 containerd[1467]: 2026-04-24 23:35:29.380 [INFO][5008] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Apr 24 23:35:29.493923 containerd[1467]: 2026-04-24 23:35:29.380 [INFO][5008] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Apr 24 23:35:29.493923 containerd[1467]: 2026-04-24 23:35:29.435 [INFO][5018] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" HandleID="k8s-pod-network.933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Workload="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" Apr 24 23:35:29.493923 containerd[1467]: 2026-04-24 23:35:29.435 [INFO][5018] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:35:29.493923 containerd[1467]: 2026-04-24 23:35:29.435 [INFO][5018] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:35:29.493923 containerd[1467]: 2026-04-24 23:35:29.483 [INFO][5018] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" HandleID="k8s-pod-network.933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Workload="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" Apr 24 23:35:29.493923 containerd[1467]: 2026-04-24 23:35:29.483 [INFO][5018] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" HandleID="k8s-pod-network.933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Workload="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" Apr 24 23:35:29.493923 containerd[1467]: 2026-04-24 23:35:29.486 [INFO][5018] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:35:29.493923 containerd[1467]: 2026-04-24 23:35:29.490 [INFO][5008] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Apr 24 23:35:29.495334 containerd[1467]: time="2026-04-24T23:35:29.494300916Z" level=info msg="TearDown network for sandbox \"933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068\" successfully" Apr 24 23:35:29.495334 containerd[1467]: time="2026-04-24T23:35:29.494333086Z" level=info msg="StopPodSandbox for \"933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068\" returns successfully" Apr 24 23:35:29.552375 containerd[1467]: time="2026-04-24T23:35:29.551088148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:29.553091 containerd[1467]: time="2026-04-24T23:35:29.553056716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 24 23:35:29.553858 containerd[1467]: time="2026-04-24T23:35:29.553837913Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:29.556013 containerd[1467]: time="2026-04-24T23:35:29.555992838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:35:29.557756 containerd[1467]: time="2026-04-24T23:35:29.557732260Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.232803141s" Apr 24 
23:35:29.557835 containerd[1467]: time="2026-04-24T23:35:29.557820138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 24 23:35:29.565572 containerd[1467]: time="2026-04-24T23:35:29.565476633Z" level=info msg="CreateContainer within sandbox \"c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 24 23:35:29.583480 systemd[1]: Created slice kubepods-besteffort-pod1664745d_0873_4d78_8d14_f66847286465.slice - libcontainer container kubepods-besteffort-pod1664745d_0873_4d78_8d14_f66847286465.slice. Apr 24 23:35:29.589346 containerd[1467]: time="2026-04-24T23:35:29.587074220Z" level=info msg="CreateContainer within sandbox \"c8dbe5ef582c0aae78326859dc32f27b6250e1d837b6917ab4e314abe8e34ebd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f7aa3b911f85afcf0edd23f9e50ffcc77f5e07232c7f14cc0a3d499b42b337b1\"" Apr 24 23:35:29.589346 containerd[1467]: time="2026-04-24T23:35:29.588260701Z" level=info msg="StartContainer for \"f7aa3b911f85afcf0edd23f9e50ffcc77f5e07232c7f14cc0a3d499b42b337b1\"" Apr 24 23:35:29.633869 systemd[1]: Started cri-containerd-f7aa3b911f85afcf0edd23f9e50ffcc77f5e07232c7f14cc0a3d499b42b337b1.scope - libcontainer container f7aa3b911f85afcf0edd23f9e50ffcc77f5e07232c7f14cc0a3d499b42b337b1. 
Apr 24 23:35:29.667303 containerd[1467]: time="2026-04-24T23:35:29.667249270Z" level=info msg="StartContainer for \"f7aa3b911f85afcf0edd23f9e50ffcc77f5e07232c7f14cc0a3d499b42b337b1\" returns successfully" Apr 24 23:35:29.675032 kubelet[2551]: I0424 23:35:29.673728 2551 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/c5f4d09e-c860-46dc-87d1-e132c91058f2-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5f4d09e-c860-46dc-87d1-e132c91058f2-whisker-ca-bundle\") pod \"c5f4d09e-c860-46dc-87d1-e132c91058f2\" (UID: \"c5f4d09e-c860-46dc-87d1-e132c91058f2\") " Apr 24 23:35:29.675032 kubelet[2551]: I0424 23:35:29.673777 2551 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/c5f4d09e-c860-46dc-87d1-e132c91058f2-nginx-config\" (UniqueName: \"kubernetes.io/configmap/c5f4d09e-c860-46dc-87d1-e132c91058f2-nginx-config\") pod \"c5f4d09e-c860-46dc-87d1-e132c91058f2\" (UID: \"c5f4d09e-c860-46dc-87d1-e132c91058f2\") " Apr 24 23:35:29.675032 kubelet[2551]: I0424 23:35:29.673803 2551 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/c5f4d09e-c860-46dc-87d1-e132c91058f2-kube-api-access-wvlpx\" (UniqueName: \"kubernetes.io/projected/c5f4d09e-c860-46dc-87d1-e132c91058f2-kube-api-access-wvlpx\") pod \"c5f4d09e-c860-46dc-87d1-e132c91058f2\" (UID: \"c5f4d09e-c860-46dc-87d1-e132c91058f2\") " Apr 24 23:35:29.675032 kubelet[2551]: I0424 23:35:29.673821 2551 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/c5f4d09e-c860-46dc-87d1-e132c91058f2-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c5f4d09e-c860-46dc-87d1-e132c91058f2-whisker-backend-key-pair\") pod \"c5f4d09e-c860-46dc-87d1-e132c91058f2\" (UID: \"c5f4d09e-c860-46dc-87d1-e132c91058f2\") " Apr 24 23:35:29.675032 kubelet[2551]: I0424 23:35:29.673881 2551 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/1664745d-0873-4d78-8d14-f66847286465-nginx-config\") pod \"whisker-65d98d9f75-29qr2\" (UID: \"1664745d-0873-4d78-8d14-f66847286465\") " pod="calico-system/whisker-65d98d9f75-29qr2" Apr 24 23:35:29.675306 kubelet[2551]: I0424 23:35:29.673904 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1664745d-0873-4d78-8d14-f66847286465-whisker-ca-bundle\") pod \"whisker-65d98d9f75-29qr2\" (UID: \"1664745d-0873-4d78-8d14-f66847286465\") " pod="calico-system/whisker-65d98d9f75-29qr2" Apr 24 23:35:29.675306 kubelet[2551]: I0424 23:35:29.673945 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1664745d-0873-4d78-8d14-f66847286465-whisker-backend-key-pair\") pod \"whisker-65d98d9f75-29qr2\" (UID: \"1664745d-0873-4d78-8d14-f66847286465\") " pod="calico-system/whisker-65d98d9f75-29qr2" Apr 24 23:35:29.675306 kubelet[2551]: I0424 23:35:29.673962 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgh7k\" (UniqueName: \"kubernetes.io/projected/1664745d-0873-4d78-8d14-f66847286465-kube-api-access-pgh7k\") pod \"whisker-65d98d9f75-29qr2\" (UID: \"1664745d-0873-4d78-8d14-f66847286465\") " pod="calico-system/whisker-65d98d9f75-29qr2" Apr 24 23:35:29.675306 kubelet[2551]: I0424 23:35:29.674426 2551 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f4d09e-c860-46dc-87d1-e132c91058f2-whisker-ca-bundle" pod "c5f4d09e-c860-46dc-87d1-e132c91058f2" (UID: "c5f4d09e-c860-46dc-87d1-e132c91058f2"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:35:29.675306 kubelet[2551]: I0424 23:35:29.674779 2551 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f4d09e-c860-46dc-87d1-e132c91058f2-nginx-config" pod "c5f4d09e-c860-46dc-87d1-e132c91058f2" (UID: "c5f4d09e-c860-46dc-87d1-e132c91058f2"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:35:29.679326 kubelet[2551]: I0424 23:35:29.679305 2551 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f4d09e-c860-46dc-87d1-e132c91058f2-whisker-backend-key-pair" pod "c5f4d09e-c860-46dc-87d1-e132c91058f2" (UID: "c5f4d09e-c860-46dc-87d1-e132c91058f2"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 23:35:29.680895 kubelet[2551]: I0424 23:35:29.680850 2551 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f4d09e-c860-46dc-87d1-e132c91058f2-kube-api-access-wvlpx" pod "c5f4d09e-c860-46dc-87d1-e132c91058f2" (UID: "c5f4d09e-c860-46dc-87d1-e132c91058f2"). InnerVolumeSpecName "kube-api-access-wvlpx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 23:35:29.774706 kubelet[2551]: I0424 23:35:29.774629 2551 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c5f4d09e-c860-46dc-87d1-e132c91058f2-whisker-backend-key-pair\") on node \"172-238-161-65\" DevicePath \"\"" Apr 24 23:35:29.774706 kubelet[2551]: I0424 23:35:29.774665 2551 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c5f4d09e-c860-46dc-87d1-e132c91058f2-nginx-config\") on node \"172-238-161-65\" DevicePath \"\"" Apr 24 23:35:29.774706 kubelet[2551]: I0424 23:35:29.774712 2551 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wvlpx\" (UniqueName: \"kubernetes.io/projected/c5f4d09e-c860-46dc-87d1-e132c91058f2-kube-api-access-wvlpx\") on node \"172-238-161-65\" DevicePath \"\"" Apr 24 23:35:29.774706 kubelet[2551]: I0424 23:35:29.774726 2551 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5f4d09e-c860-46dc-87d1-e132c91058f2-whisker-ca-bundle\") on node \"172-238-161-65\" DevicePath \"\"" Apr 24 23:35:29.798577 systemd[1]: Removed slice kubepods-besteffort-podc5f4d09e_c860_46dc_87d1_e132c91058f2.slice - libcontainer container kubepods-besteffort-podc5f4d09e_c860_46dc_87d1_e132c91058f2.slice. 
Apr 24 23:35:29.859074 kubelet[2551]: I0424 23:35:29.859050 2551 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 24 23:35:29.859854 kubelet[2551]: I0424 23:35:29.859247 2551 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 24 23:35:29.895331 containerd[1467]: time="2026-04-24T23:35:29.895245154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65d98d9f75-29qr2,Uid:1664745d-0873-4d78-8d14-f66847286465,Namespace:calico-system,Attempt:0,}" Apr 24 23:35:29.998489 kubelet[2551]: I0424 23:35:29.998457 2551 scope.go:122] "RemoveContainer" containerID="0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c" Apr 24 23:35:30.003290 containerd[1467]: time="2026-04-24T23:35:30.002284905Z" level=info msg="RemoveContainer for \"0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c\"" Apr 24 23:35:30.019824 containerd[1467]: time="2026-04-24T23:35:30.019128480Z" level=info msg="RemoveContainer for \"0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c\" returns successfully" Apr 24 23:35:30.022742 kubelet[2551]: I0424 23:35:30.022718 2551 scope.go:122] "RemoveContainer" containerID="f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae" Apr 24 23:35:30.031775 containerd[1467]: time="2026-04-24T23:35:30.030851636Z" level=info msg="RemoveContainer for \"f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae\"" Apr 24 23:35:30.033746 systemd-networkd[1388]: cali4d3ed94ceb3: Link UP Apr 24 23:35:30.035182 systemd-networkd[1388]: cali4d3ed94ceb3: Gained carrier Apr 24 23:35:30.057046 systemd[1]: run-netns-cni\x2daf3b4bf3\x2d7b19\x2db27e\x2d3b49\x2dd3006cb066c0.mount: Deactivated successfully. 
Apr 24 23:35:30.057160 systemd[1]: var-lib-kubelet-pods-c5f4d09e\x2dc860\x2d46dc\x2d87d1\x2de132c91058f2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwvlpx.mount: Deactivated successfully. Apr 24 23:35:30.057237 systemd[1]: var-lib-kubelet-pods-c5f4d09e\x2dc860\x2d46dc\x2d87d1\x2de132c91058f2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 24 23:35:30.068336 kubelet[2551]: I0424 23:35:30.067780 2551 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-hqhqx" podStartSLOduration=14.089224179 podStartE2EDuration="25.067760825s" podCreationTimestamp="2026-04-24 23:35:05 +0000 UTC" firstStartedPulling="2026-04-24 23:35:18.581361898 +0000 UTC m=+30.919558420" lastFinishedPulling="2026-04-24 23:35:29.559898544 +0000 UTC m=+41.898095066" observedRunningTime="2026-04-24 23:35:30.042394564 +0000 UTC m=+42.380591096" watchObservedRunningTime="2026-04-24 23:35:30.067760825 +0000 UTC m=+42.405957347" Apr 24 23:35:30.068492 containerd[1467]: time="2026-04-24T23:35:30.068301556Z" level=info msg="RemoveContainer for \"f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae\" returns successfully" Apr 24 23:35:30.070217 kubelet[2551]: I0424 23:35:30.070198 2551 scope.go:122] "RemoveContainer" containerID="0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c" Apr 24 23:35:30.070938 containerd[1467]: time="2026-04-24T23:35:30.070861706Z" level=error msg="ContainerStatus for \"0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c\": not found" Apr 24 23:35:30.071361 kubelet[2551]: E0424 23:35:30.071063 2551 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c\": not found" containerID="0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c" Apr 24 23:35:30.071361 kubelet[2551]: I0424 23:35:30.071095 2551 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c"} err="failed to get container status \"0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c\": not found" Apr 24 23:35:30.071361 kubelet[2551]: I0424 23:35:30.071136 2551 scope.go:122] "RemoveContainer" containerID="f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae" Apr 24 23:35:30.071460 containerd[1467]: time="2026-04-24T23:35:30.071315859Z" level=error msg="ContainerStatus for \"f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae\": not found" Apr 24 23:35:30.072844 kubelet[2551]: E0424 23:35:30.072719 2551 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae\": not found" containerID="f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae" Apr 24 23:35:30.072844 kubelet[2551]: I0424 23:35:30.072755 2551 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae"} err="failed to get container status \"f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae\": not found" Apr 24 23:35:30.072844 kubelet[2551]: I0424 23:35:30.072788 2551 scope.go:122] "RemoveContainer" containerID="0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c" Apr 24 23:35:30.073704 containerd[1467]: time="2026-04-24T23:35:30.073385326Z" level=error msg="ContainerStatus for \"0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c\": not found" Apr 24 23:35:30.073704 containerd[1467]: time="2026-04-24T23:35:30.073647842Z" level=error msg="ContainerStatus for \"f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae\": not found" Apr 24 23:35:30.073781 kubelet[2551]: I0424 23:35:30.073474 2551 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c"} err="failed to get container status \"0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"0bee050f1a525cec67d165fd82577311323a563fbcfdf0771d01828b30902d2c\": not found" Apr 24 23:35:30.073781 kubelet[2551]: I0424 23:35:30.073489 2551 scope.go:122] "RemoveContainer" containerID="f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae" Apr 24 23:35:30.073948 kubelet[2551]: I0424 23:35:30.073898 2551 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae"} err="failed to get container status \"f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae\": rpc 
error: code = NotFound desc = an error occurred when try to find container \"f59eb221f4a54408d4d876a2e775b56ac403a43ba7ecdbad8df945d3201bdcae\": not found" Apr 24 23:35:30.075383 containerd[1467]: 2026-04-24 23:35:29.927 [ERROR][5069] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:35:30.075383 containerd[1467]: 2026-04-24 23:35:29.938 [INFO][5069] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--161--65-k8s-whisker--65d98d9f75--29qr2-eth0 whisker-65d98d9f75- calico-system 1664745d-0873-4d78-8d14-f66847286465 1069 0 2026-04-24 23:35:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:65d98d9f75 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-238-161-65 whisker-65d98d9f75-29qr2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4d3ed94ceb3 [] [] }} ContainerID="334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265" Namespace="calico-system" Pod="whisker-65d98d9f75-29qr2" WorkloadEndpoint="172--238--161--65-k8s-whisker--65d98d9f75--29qr2-" Apr 24 23:35:30.075383 containerd[1467]: 2026-04-24 23:35:29.938 [INFO][5069] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265" Namespace="calico-system" Pod="whisker-65d98d9f75-29qr2" WorkloadEndpoint="172--238--161--65-k8s-whisker--65d98d9f75--29qr2-eth0" Apr 24 23:35:30.075383 containerd[1467]: 2026-04-24 23:35:29.963 [INFO][5080] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265" HandleID="k8s-pod-network.334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265" 
Workload="172--238--161--65-k8s-whisker--65d98d9f75--29qr2-eth0" Apr 24 23:35:30.075383 containerd[1467]: 2026-04-24 23:35:29.974 [INFO][5080] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265" HandleID="k8s-pod-network.334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265" Workload="172--238--161--65-k8s-whisker--65d98d9f75--29qr2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000407dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-161-65", "pod":"whisker-65d98d9f75-29qr2", "timestamp":"2026-04-24 23:35:29.963955511 +0000 UTC"}, Hostname:"172-238-161-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003bfb80)} Apr 24 23:35:30.075383 containerd[1467]: 2026-04-24 23:35:29.974 [INFO][5080] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:35:30.075383 containerd[1467]: 2026-04-24 23:35:29.974 [INFO][5080] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:35:30.075383 containerd[1467]: 2026-04-24 23:35:29.974 [INFO][5080] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-161-65' Apr 24 23:35:30.075383 containerd[1467]: 2026-04-24 23:35:29.976 [INFO][5080] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265" host="172-238-161-65" Apr 24 23:35:30.075383 containerd[1467]: 2026-04-24 23:35:29.980 [INFO][5080] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-161-65" Apr 24 23:35:30.075383 containerd[1467]: 2026-04-24 23:35:29.984 [INFO][5080] ipam/ipam.go 526: Trying affinity for 192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:30.075383 containerd[1467]: 2026-04-24 23:35:29.986 [INFO][5080] ipam/ipam.go 160: Attempting to load block cidr=192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:30.075383 containerd[1467]: 2026-04-24 23:35:29.988 [INFO][5080] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="172-238-161-65" Apr 24 23:35:30.075383 containerd[1467]: 2026-04-24 23:35:29.988 [INFO][5080] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265" host="172-238-161-65" Apr 24 23:35:30.075383 containerd[1467]: 2026-04-24 23:35:29.990 [INFO][5080] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265 Apr 24 23:35:30.075383 containerd[1467]: 2026-04-24 23:35:29.995 [INFO][5080] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265" host="172-238-161-65" Apr 24 23:35:30.075383 containerd[1467]: 2026-04-24 23:35:30.010 [INFO][5080] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.40.9/26] block=192.168.40.0/26 
handle="k8s-pod-network.334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265" host="172-238-161-65" Apr 24 23:35:30.075383 containerd[1467]: 2026-04-24 23:35:30.010 [INFO][5080] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.40.9/26] handle="k8s-pod-network.334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265" host="172-238-161-65" Apr 24 23:35:30.075383 containerd[1467]: 2026-04-24 23:35:30.010 [INFO][5080] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:35:30.075383 containerd[1467]: 2026-04-24 23:35:30.011 [INFO][5080] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.40.9/26] IPv6=[] ContainerID="334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265" HandleID="k8s-pod-network.334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265" Workload="172--238--161--65-k8s-whisker--65d98d9f75--29qr2-eth0" Apr 24 23:35:30.075941 containerd[1467]: 2026-04-24 23:35:30.027 [INFO][5069] cni-plugin/k8s.go 418: Populated endpoint ContainerID="334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265" Namespace="calico-system" Pod="whisker-65d98d9f75-29qr2" WorkloadEndpoint="172--238--161--65-k8s-whisker--65d98d9f75--29qr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--65-k8s-whisker--65d98d9f75--29qr2-eth0", GenerateName:"whisker-65d98d9f75-", Namespace:"calico-system", SelfLink:"", UID:"1664745d-0873-4d78-8d14-f66847286465", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 35, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65d98d9f75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-65", ContainerID:"", Pod:"whisker-65d98d9f75-29qr2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.40.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4d3ed94ceb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:35:30.075941 containerd[1467]: 2026-04-24 23:35:30.027 [INFO][5069] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.9/32] ContainerID="334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265" Namespace="calico-system" Pod="whisker-65d98d9f75-29qr2" WorkloadEndpoint="172--238--161--65-k8s-whisker--65d98d9f75--29qr2-eth0" Apr 24 23:35:30.075941 containerd[1467]: 2026-04-24 23:35:30.027 [INFO][5069] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d3ed94ceb3 ContainerID="334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265" Namespace="calico-system" Pod="whisker-65d98d9f75-29qr2" WorkloadEndpoint="172--238--161--65-k8s-whisker--65d98d9f75--29qr2-eth0" Apr 24 23:35:30.075941 containerd[1467]: 2026-04-24 23:35:30.033 [INFO][5069] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265" Namespace="calico-system" Pod="whisker-65d98d9f75-29qr2" WorkloadEndpoint="172--238--161--65-k8s-whisker--65d98d9f75--29qr2-eth0" Apr 24 23:35:30.075941 containerd[1467]: 2026-04-24 23:35:30.036 [INFO][5069] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265" Namespace="calico-system" 
Pod="whisker-65d98d9f75-29qr2" WorkloadEndpoint="172--238--161--65-k8s-whisker--65d98d9f75--29qr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--65-k8s-whisker--65d98d9f75--29qr2-eth0", GenerateName:"whisker-65d98d9f75-", Namespace:"calico-system", SelfLink:"", UID:"1664745d-0873-4d78-8d14-f66847286465", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 35, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65d98d9f75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-65", ContainerID:"334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265", Pod:"whisker-65d98d9f75-29qr2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.40.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4d3ed94ceb3", MAC:"9a:02:d5:51:70:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:35:30.075941 containerd[1467]: 2026-04-24 23:35:30.068 [INFO][5069] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265" Namespace="calico-system" Pod="whisker-65d98d9f75-29qr2" WorkloadEndpoint="172--238--161--65-k8s-whisker--65d98d9f75--29qr2-eth0" Apr 24 23:35:30.099598 containerd[1467]: 
time="2026-04-24T23:35:30.099302839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:35:30.099598 containerd[1467]: time="2026-04-24T23:35:30.099357478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:35:30.099598 containerd[1467]: time="2026-04-24T23:35:30.099368288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:30.099598 containerd[1467]: time="2026-04-24T23:35:30.099454896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:30.144130 systemd[1]: run-containerd-runc-k8s.io-334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265-runc.eaGgeS.mount: Deactivated successfully. Apr 24 23:35:30.156591 systemd[1]: Started cri-containerd-334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265.scope - libcontainer container 334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265. 
Apr 24 23:35:30.202618 containerd[1467]: time="2026-04-24T23:35:30.202503005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65d98d9f75-29qr2,Uid:1664745d-0873-4d78-8d14-f66847286465,Namespace:calico-system,Attempt:0,} returns sandbox id \"334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265\"" Apr 24 23:35:30.209196 containerd[1467]: time="2026-04-24T23:35:30.208991493Z" level=info msg="CreateContainer within sandbox \"334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 24 23:35:30.224505 containerd[1467]: time="2026-04-24T23:35:30.224358102Z" level=info msg="CreateContainer within sandbox \"334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4b3ba5a4837f7f6d7efa19dcde65af84160a54cdf74472c8a04bf08aa2a96054\"" Apr 24 23:35:30.227719 containerd[1467]: time="2026-04-24T23:35:30.226066895Z" level=info msg="StartContainer for \"4b3ba5a4837f7f6d7efa19dcde65af84160a54cdf74472c8a04bf08aa2a96054\"" Apr 24 23:35:30.264857 systemd[1]: Started cri-containerd-4b3ba5a4837f7f6d7efa19dcde65af84160a54cdf74472c8a04bf08aa2a96054.scope - libcontainer container 4b3ba5a4837f7f6d7efa19dcde65af84160a54cdf74472c8a04bf08aa2a96054. 
Apr 24 23:35:30.343269 containerd[1467]: time="2026-04-24T23:35:30.343225402Z" level=info msg="StartContainer for \"4b3ba5a4837f7f6d7efa19dcde65af84160a54cdf74472c8a04bf08aa2a96054\" returns successfully" Apr 24 23:35:30.349700 containerd[1467]: time="2026-04-24T23:35:30.349647411Z" level=info msg="CreateContainer within sandbox \"334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 24 23:35:30.358269 containerd[1467]: time="2026-04-24T23:35:30.358230655Z" level=info msg="CreateContainer within sandbox \"334fb309d4aeeb017caee32e72129a9f3e739994bc9a081aeb5830e4ed1e3265\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c1ff1123fabe9fb53b56c50290d69892a7dd57bee0881aa26cc5e6b03d684db2\"" Apr 24 23:35:30.360381 containerd[1467]: time="2026-04-24T23:35:30.360269253Z" level=info msg="StartContainer for \"c1ff1123fabe9fb53b56c50290d69892a7dd57bee0881aa26cc5e6b03d684db2\"" Apr 24 23:35:30.406839 systemd[1]: Started cri-containerd-c1ff1123fabe9fb53b56c50290d69892a7dd57bee0881aa26cc5e6b03d684db2.scope - libcontainer container c1ff1123fabe9fb53b56c50290d69892a7dd57bee0881aa26cc5e6b03d684db2. 
Apr 24 23:35:30.462075 containerd[1467]: time="2026-04-24T23:35:30.461985173Z" level=info msg="StartContainer for \"c1ff1123fabe9fb53b56c50290d69892a7dd57bee0881aa26cc5e6b03d684db2\" returns successfully" Apr 24 23:35:31.027140 kubelet[2551]: I0424 23:35:31.026635 2551 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-65d98d9f75-29qr2" podStartSLOduration=2.026621015 podStartE2EDuration="2.026621015s" podCreationTimestamp="2026-04-24 23:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:35:31.026548476 +0000 UTC m=+43.364744998" watchObservedRunningTime="2026-04-24 23:35:31.026621015 +0000 UTC m=+43.364817537" Apr 24 23:35:31.050879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3916955880.mount: Deactivated successfully. Apr 24 23:35:31.227879 systemd-networkd[1388]: cali4d3ed94ceb3: Gained IPv6LL Apr 24 23:35:31.509722 kubelet[2551]: I0424 23:35:31.509427 2551 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:35:31.786121 kubelet[2551]: I0424 23:35:31.785978 2551 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c5f4d09e-c860-46dc-87d1-e132c91058f2" path="/var/lib/kubelet/pods/c5f4d09e-c860-46dc-87d1-e132c91058f2/volumes" Apr 24 23:35:35.731258 kubelet[2551]: I0424 23:35:35.731047 2551 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:35:35.732971 kubelet[2551]: E0424 23:35:35.732070 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:35:36.028338 kubelet[2551]: E0424 23:35:36.028213 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" 
Apr 24 23:35:36.610702 kernel: calico-node[5346]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 24 23:35:37.267872 systemd-networkd[1388]: vxlan.calico: Link UP Apr 24 23:35:37.267884 systemd-networkd[1388]: vxlan.calico: Gained carrier Apr 24 23:35:38.780045 systemd-networkd[1388]: vxlan.calico: Gained IPv6LL Apr 24 23:35:47.757755 containerd[1467]: time="2026-04-24T23:35:47.757606902Z" level=info msg="StopPodSandbox for \"933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068\"" Apr 24 23:35:47.836255 containerd[1467]: 2026-04-24 23:35:47.802 [WARNING][5531] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" WorkloadEndpoint="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" Apr 24 23:35:47.836255 containerd[1467]: 2026-04-24 23:35:47.802 [INFO][5531] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Apr 24 23:35:47.836255 containerd[1467]: 2026-04-24 23:35:47.802 [INFO][5531] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" iface="eth0" netns="" Apr 24 23:35:47.836255 containerd[1467]: 2026-04-24 23:35:47.802 [INFO][5531] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Apr 24 23:35:47.836255 containerd[1467]: 2026-04-24 23:35:47.802 [INFO][5531] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Apr 24 23:35:47.836255 containerd[1467]: 2026-04-24 23:35:47.825 [INFO][5540] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" HandleID="k8s-pod-network.933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Workload="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" Apr 24 23:35:47.836255 containerd[1467]: 2026-04-24 23:35:47.825 [INFO][5540] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:35:47.836255 containerd[1467]: 2026-04-24 23:35:47.825 [INFO][5540] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:35:47.836255 containerd[1467]: 2026-04-24 23:35:47.830 [WARNING][5540] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" HandleID="k8s-pod-network.933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Workload="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" Apr 24 23:35:47.836255 containerd[1467]: 2026-04-24 23:35:47.830 [INFO][5540] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" HandleID="k8s-pod-network.933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Workload="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" Apr 24 23:35:47.836255 containerd[1467]: 2026-04-24 23:35:47.832 [INFO][5540] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:35:47.836255 containerd[1467]: 2026-04-24 23:35:47.834 [INFO][5531] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Apr 24 23:35:47.836255 containerd[1467]: time="2026-04-24T23:35:47.836219396Z" level=info msg="TearDown network for sandbox \"933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068\" successfully" Apr 24 23:35:47.836255 containerd[1467]: time="2026-04-24T23:35:47.836242826Z" level=info msg="StopPodSandbox for \"933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068\" returns successfully" Apr 24 23:35:47.836893 containerd[1467]: time="2026-04-24T23:35:47.836860520Z" level=info msg="RemovePodSandbox for \"933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068\"" Apr 24 23:35:47.836933 containerd[1467]: time="2026-04-24T23:35:47.836900009Z" level=info msg="Forcibly stopping sandbox \"933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068\"" Apr 24 23:35:47.908857 containerd[1467]: 2026-04-24 23:35:47.873 [WARNING][5554] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" WorkloadEndpoint="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" Apr 24 23:35:47.908857 containerd[1467]: 2026-04-24 23:35:47.874 [INFO][5554] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Apr 24 23:35:47.908857 containerd[1467]: 2026-04-24 23:35:47.874 [INFO][5554] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" iface="eth0" netns="" Apr 24 23:35:47.908857 containerd[1467]: 2026-04-24 23:35:47.874 [INFO][5554] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Apr 24 23:35:47.908857 containerd[1467]: 2026-04-24 23:35:47.874 [INFO][5554] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Apr 24 23:35:47.908857 containerd[1467]: 2026-04-24 23:35:47.897 [INFO][5561] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" HandleID="k8s-pod-network.933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Workload="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" Apr 24 23:35:47.908857 containerd[1467]: 2026-04-24 23:35:47.897 [INFO][5561] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:35:47.908857 containerd[1467]: 2026-04-24 23:35:47.897 [INFO][5561] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:35:47.908857 containerd[1467]: 2026-04-24 23:35:47.902 [WARNING][5561] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" HandleID="k8s-pod-network.933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Workload="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" Apr 24 23:35:47.908857 containerd[1467]: 2026-04-24 23:35:47.903 [INFO][5561] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" HandleID="k8s-pod-network.933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Workload="172--238--161--65-k8s-whisker--74b6665b9d--d5ddg-eth0" Apr 24 23:35:47.908857 containerd[1467]: 2026-04-24 23:35:47.904 [INFO][5561] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:35:47.908857 containerd[1467]: 2026-04-24 23:35:47.906 [INFO][5554] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068" Apr 24 23:35:47.909224 containerd[1467]: time="2026-04-24T23:35:47.908910197Z" level=info msg="TearDown network for sandbox \"933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068\" successfully" Apr 24 23:35:47.913757 containerd[1467]: time="2026-04-24T23:35:47.913716511Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 24 23:35:47.913817 containerd[1467]: time="2026-04-24T23:35:47.913780680Z" level=info msg="RemovePodSandbox \"933eec268ec168cd7ae84924abccee4fbf4d83e163debb2538de6693a4670068\" returns successfully" Apr 24 23:35:53.002586 systemd[1]: run-containerd-runc-k8s.io-a41de66d53574e24320a20e18cc6464c1a96a78852729adfd24ad95972862d08-runc.rhXHby.mount: Deactivated successfully. 
Apr 24 23:36:00.783039 kubelet[2551]: E0424 23:36:00.783000 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:36:02.827662 kubelet[2551]: I0424 23:36:02.827276 2551 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:36:09.783651 kubelet[2551]: E0424 23:36:09.782924 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:36:17.783299 kubelet[2551]: E0424 23:36:17.783207 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:36:25.783817 kubelet[2551]: E0424 23:36:25.783065 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:36:32.782979 kubelet[2551]: E0424 23:36:32.782418 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:36:38.783365 kubelet[2551]: E0424 23:36:38.782123 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:36:46.783103 kubelet[2551]: E0424 23:36:46.783058 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:36:59.149026 systemd[1]: Started sshd@7-172.238.161.65:22-4.175.71.9:42158.service - OpenSSH 
per-connection server daemon (4.175.71.9:42158). Apr 24 23:36:59.761704 sshd[5851]: Accepted publickey for core from 4.175.71.9 port 42158 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI Apr 24 23:36:59.763208 sshd[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:36:59.771325 systemd-logind[1447]: New session 8 of user core. Apr 24 23:36:59.777814 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 24 23:37:00.260643 sshd[5851]: pam_unix(sshd:session): session closed for user core Apr 24 23:37:00.266305 systemd[1]: sshd@7-172.238.161.65:22-4.175.71.9:42158.service: Deactivated successfully. Apr 24 23:37:00.269381 systemd[1]: session-8.scope: Deactivated successfully. Apr 24 23:37:00.270199 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit. Apr 24 23:37:00.271791 systemd-logind[1447]: Removed session 8. Apr 24 23:37:05.370375 systemd[1]: Started sshd@8-172.238.161.65:22-4.175.71.9:41292.service - OpenSSH per-connection server daemon (4.175.71.9:41292). Apr 24 23:37:05.974705 sshd[5864]: Accepted publickey for core from 4.175.71.9 port 41292 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI Apr 24 23:37:05.978338 sshd[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:37:05.984610 systemd-logind[1447]: New session 9 of user core. Apr 24 23:37:05.989843 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 24 23:37:06.455474 sshd[5864]: pam_unix(sshd:session): session closed for user core Apr 24 23:37:06.459731 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. Apr 24 23:37:06.460597 systemd[1]: sshd@8-172.238.161.65:22-4.175.71.9:41292.service: Deactivated successfully. Apr 24 23:37:06.463103 systemd[1]: session-9.scope: Deactivated successfully. Apr 24 23:37:06.464056 systemd-logind[1447]: Removed session 9. 
Apr 24 23:37:11.568142 systemd[1]: Started sshd@9-172.238.161.65:22-4.175.71.9:41302.service - OpenSSH per-connection server daemon (4.175.71.9:41302). Apr 24 23:37:12.172706 sshd[5911]: Accepted publickey for core from 4.175.71.9 port 41302 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI Apr 24 23:37:12.174946 sshd[5911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:37:12.181357 systemd-logind[1447]: New session 10 of user core. Apr 24 23:37:12.188795 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 24 23:37:12.686788 sshd[5911]: pam_unix(sshd:session): session closed for user core Apr 24 23:37:12.692432 systemd[1]: sshd@9-172.238.161.65:22-4.175.71.9:41302.service: Deactivated successfully. Apr 24 23:37:12.695769 systemd[1]: session-10.scope: Deactivated successfully. Apr 24 23:37:12.696559 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit. Apr 24 23:37:12.698272 systemd-logind[1447]: Removed session 10. Apr 24 23:37:12.798980 systemd[1]: Started sshd@10-172.238.161.65:22-4.175.71.9:41306.service - OpenSSH per-connection server daemon (4.175.71.9:41306). Apr 24 23:37:13.439506 sshd[5943]: Accepted publickey for core from 4.175.71.9 port 41306 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI Apr 24 23:37:13.441303 sshd[5943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:37:13.446612 systemd-logind[1447]: New session 11 of user core. Apr 24 23:37:13.452880 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 24 23:37:14.009236 sshd[5943]: pam_unix(sshd:session): session closed for user core Apr 24 23:37:14.015928 systemd[1]: sshd@10-172.238.161.65:22-4.175.71.9:41306.service: Deactivated successfully. Apr 24 23:37:14.020503 systemd[1]: session-11.scope: Deactivated successfully. Apr 24 23:37:14.024023 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit. 
Apr 24 23:37:14.025358 systemd-logind[1447]: Removed session 11. Apr 24 23:37:14.119709 systemd[1]: Started sshd@11-172.238.161.65:22-4.175.71.9:41310.service - OpenSSH per-connection server daemon (4.175.71.9:41310). Apr 24 23:37:14.730297 sshd[5970]: Accepted publickey for core from 4.175.71.9 port 41310 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI Apr 24 23:37:14.732006 sshd[5970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:37:14.737642 systemd-logind[1447]: New session 12 of user core. Apr 24 23:37:14.743803 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 24 23:37:15.242992 sshd[5970]: pam_unix(sshd:session): session closed for user core Apr 24 23:37:15.249146 systemd[1]: sshd@11-172.238.161.65:22-4.175.71.9:41310.service: Deactivated successfully. Apr 24 23:37:15.252365 systemd[1]: session-12.scope: Deactivated successfully. Apr 24 23:37:15.253350 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit. Apr 24 23:37:15.254696 systemd-logind[1447]: Removed session 12. Apr 24 23:37:18.783723 kubelet[2551]: E0424 23:37:18.782656 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:37:18.783723 kubelet[2551]: E0424 23:37:18.783193 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:37:18.783723 kubelet[2551]: E0424 23:37:18.783263 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Apr 24 23:37:20.360083 systemd[1]: Started sshd@12-172.238.161.65:22-4.175.71.9:40316.service - OpenSSH per-connection server daemon (4.175.71.9:40316). 
Apr 24 23:37:21.001730 sshd[6004]: Accepted publickey for core from 4.175.71.9 port 40316 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI
Apr 24 23:37:21.003753 sshd[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:37:21.009723 systemd-logind[1447]: New session 13 of user core.
Apr 24 23:37:21.016000 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 24 23:37:21.546231 sshd[6004]: pam_unix(sshd:session): session closed for user core
Apr 24 23:37:21.551995 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit.
Apr 24 23:37:21.555521 systemd[1]: sshd@12-172.238.161.65:22-4.175.71.9:40316.service: Deactivated successfully.
Apr 24 23:37:21.559541 systemd[1]: session-13.scope: Deactivated successfully.
Apr 24 23:37:21.561132 systemd-logind[1447]: Removed session 13.
Apr 24 23:37:21.665906 systemd[1]: Started sshd@13-172.238.161.65:22-4.175.71.9:40326.service - OpenSSH per-connection server daemon (4.175.71.9:40326).
Apr 24 23:37:22.302096 sshd[6016]: Accepted publickey for core from 4.175.71.9 port 40326 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI
Apr 24 23:37:22.304562 sshd[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:37:22.308956 systemd-logind[1447]: New session 14 of user core.
Apr 24 23:37:22.314794 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 24 23:37:23.256417 sshd[6016]: pam_unix(sshd:session): session closed for user core
Apr 24 23:37:23.260171 systemd[1]: sshd@13-172.238.161.65:22-4.175.71.9:40326.service: Deactivated successfully.
Apr 24 23:37:23.263640 systemd[1]: session-14.scope: Deactivated successfully.
Apr 24 23:37:23.268910 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit.
Apr 24 23:37:23.272149 systemd-logind[1447]: Removed session 14.
Apr 24 23:37:23.369168 systemd[1]: Started sshd@14-172.238.161.65:22-4.175.71.9:40336.service - OpenSSH per-connection server daemon (4.175.71.9:40336).
Apr 24 23:37:23.978717 sshd[6046]: Accepted publickey for core from 4.175.71.9 port 40336 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI
Apr 24 23:37:23.980379 sshd[6046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:37:23.987949 systemd-logind[1447]: New session 15 of user core.
Apr 24 23:37:23.991847 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 24 23:37:24.952824 sshd[6046]: pam_unix(sshd:session): session closed for user core
Apr 24 23:37:24.957370 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit.
Apr 24 23:37:24.958320 systemd[1]: sshd@14-172.238.161.65:22-4.175.71.9:40336.service: Deactivated successfully.
Apr 24 23:37:24.961286 systemd[1]: session-15.scope: Deactivated successfully.
Apr 24 23:37:24.962252 systemd-logind[1447]: Removed session 15.
Apr 24 23:37:25.058109 systemd[1]: Started sshd@15-172.238.161.65:22-4.175.71.9:40348.service - OpenSSH per-connection server daemon (4.175.71.9:40348).
Apr 24 23:37:25.662121 sshd[6072]: Accepted publickey for core from 4.175.71.9 port 40348 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI
Apr 24 23:37:25.663820 sshd[6072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:37:25.670721 systemd-logind[1447]: New session 16 of user core.
Apr 24 23:37:25.677804 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 24 23:37:25.991467 systemd[1]: run-containerd-runc-k8s.io-53eb6e67eeedee9131b387ddb730167e27dee32dc51a21d53f9654d30e2daaf2-runc.CTE7uN.mount: Deactivated successfully.
Apr 24 23:37:26.287419 sshd[6072]: pam_unix(sshd:session): session closed for user core
Apr 24 23:37:26.292277 systemd[1]: sshd@15-172.238.161.65:22-4.175.71.9:40348.service: Deactivated successfully.
Apr 24 23:37:26.295068 systemd[1]: session-16.scope: Deactivated successfully.
Apr 24 23:37:26.297652 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit.
Apr 24 23:37:26.299585 systemd-logind[1447]: Removed session 16.
Apr 24 23:37:26.406967 systemd[1]: Started sshd@16-172.238.161.65:22-4.175.71.9:58620.service - OpenSSH per-connection server daemon (4.175.71.9:58620).
Apr 24 23:37:27.053128 sshd[6104]: Accepted publickey for core from 4.175.71.9 port 58620 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI
Apr 24 23:37:27.054987 sshd[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:37:27.060605 systemd-logind[1447]: New session 17 of user core.
Apr 24 23:37:27.065806 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 24 23:37:27.561003 sshd[6104]: pam_unix(sshd:session): session closed for user core
Apr 24 23:37:27.565065 systemd[1]: sshd@16-172.238.161.65:22-4.175.71.9:58620.service: Deactivated successfully.
Apr 24 23:37:27.567379 systemd[1]: session-17.scope: Deactivated successfully.
Apr 24 23:37:27.568386 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit.
Apr 24 23:37:27.569218 systemd-logind[1447]: Removed session 17.
Apr 24 23:37:32.672904 systemd[1]: Started sshd@17-172.238.161.65:22-4.175.71.9:58630.service - OpenSSH per-connection server daemon (4.175.71.9:58630).
Apr 24 23:37:33.283226 sshd[6140]: Accepted publickey for core from 4.175.71.9 port 58630 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI
Apr 24 23:37:33.284225 sshd[6140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:37:33.289288 systemd-logind[1447]: New session 18 of user core.
Apr 24 23:37:33.298825 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 24 23:37:33.779579 sshd[6140]: pam_unix(sshd:session): session closed for user core
Apr 24 23:37:33.784252 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit.
Apr 24 23:37:33.784734 systemd[1]: sshd@17-172.238.161.65:22-4.175.71.9:58630.service: Deactivated successfully.
Apr 24 23:37:33.788568 systemd[1]: session-18.scope: Deactivated successfully.
Apr 24 23:37:33.790845 systemd-logind[1447]: Removed session 18.
Apr 24 23:37:36.783418 kubelet[2551]: E0424 23:37:36.783296 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Apr 24 23:37:38.906990 systemd[1]: Started sshd@18-172.238.161.65:22-4.175.71.9:53336.service - OpenSSH per-connection server daemon (4.175.71.9:53336).
Apr 24 23:37:39.539970 sshd[6153]: Accepted publickey for core from 4.175.71.9 port 53336 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI
Apr 24 23:37:39.541771 sshd[6153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:37:39.546548 systemd-logind[1447]: New session 19 of user core.
Apr 24 23:37:39.551791 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 24 23:37:40.065533 sshd[6153]: pam_unix(sshd:session): session closed for user core
Apr 24 23:37:40.071478 systemd[1]: sshd@18-172.238.161.65:22-4.175.71.9:53336.service: Deactivated successfully.
Apr 24 23:37:40.074697 systemd[1]: session-19.scope: Deactivated successfully.
Apr 24 23:37:40.075728 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit.
Apr 24 23:37:40.077156 systemd-logind[1447]: Removed session 19.
Apr 24 23:37:45.181151 systemd[1]: Started sshd@19-172.238.161.65:22-4.175.71.9:53344.service - OpenSSH per-connection server daemon (4.175.71.9:53344).
Apr 24 23:37:45.814217 sshd[6166]: Accepted publickey for core from 4.175.71.9 port 53344 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI
Apr 24 23:37:45.815915 sshd[6166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:37:45.821764 systemd-logind[1447]: New session 20 of user core.
Apr 24 23:37:45.827864 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 24 23:37:46.320211 sshd[6166]: pam_unix(sshd:session): session closed for user core
Apr 24 23:37:46.329473 systemd[1]: sshd@19-172.238.161.65:22-4.175.71.9:53344.service: Deactivated successfully.
Apr 24 23:37:46.333570 systemd[1]: session-20.scope: Deactivated successfully.
Apr 24 23:37:46.335528 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit.
Apr 24 23:37:46.337038 systemd-logind[1447]: Removed session 20.
Apr 24 23:37:51.428269 systemd[1]: Started sshd@20-172.238.161.65:22-4.175.71.9:43630.service - OpenSSH per-connection server daemon (4.175.71.9:43630).
Apr 24 23:37:51.783626 kubelet[2551]: E0424 23:37:51.782726 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Apr 24 23:37:51.783626 kubelet[2551]: E0424 23:37:51.783328 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Apr 24 23:37:52.032724 sshd[6203]: Accepted publickey for core from 4.175.71.9 port 43630 ssh2: RSA SHA256:qGAEp4xo5oyI2b9uarOwriHAiNNoDlNDl+jElKCVlVI
Apr 24 23:37:52.034764 sshd[6203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:37:52.040899 systemd-logind[1447]: New session 21 of user core.
Apr 24 23:37:52.045805 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 24 23:37:52.531731 sshd[6203]: pam_unix(sshd:session): session closed for user core
Apr 24 23:37:52.536659 systemd[1]: sshd@20-172.238.161.65:22-4.175.71.9:43630.service: Deactivated successfully.
Apr 24 23:37:52.539519 systemd[1]: session-21.scope: Deactivated successfully.
Apr 24 23:37:52.540211 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit.
Apr 24 23:37:52.541507 systemd-logind[1447]: Removed session 21.
Apr 24 23:37:52.983663 systemd[1]: run-containerd-runc-k8s.io-a41de66d53574e24320a20e18cc6464c1a96a78852729adfd24ad95972862d08-runc.mdFIat.mount: Deactivated successfully.
Apr 24 23:37:53.783507 kubelet[2551]: E0424 23:37:53.782646 2551 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"